5 June 2018 Tailored to the People

Tada in practice – Jim Stolze and Aigency

Douwe Schmidt

“In 15 years, I want to be able to explain to my son what we did to keep artificial intelligence understandable, so his generation will also have control of the systems that are making decisions then.” The person speaking is Jim Stolze, co-founder of Aigency: a platform that connects students and companies that want to work with artificial intelligence. Besides AI for companies, “we also ensure, via AI for Good, that AI can be used by NGOs and municipal authorities,” Stolze says, “because AI is too important to leave it entirely up to companies. If that happens, it will become a perversion.”

Like most new technologies, artificial intelligence is only accessible to an elite. Artificial intelligence depends on large data sets and complex analysis technology. As a result, the barriers to gaining access are high, and there is a risk that platform monopolies will emerge.

Stolze acknowledges that this is currently the case, but it is changing very rapidly: “Eventually, even the corner bakery will be able to start using artificial intelligence. The source materials, the data sets and the tools are quickly becoming more accessible. Nearly everyone will be able to use them within five years.”
“We recently started using a tool called OpenML. That is a new, automated form of data science that allows us to do analyses in half an hour now that would have taken us a month before.” But what you gain in time is often lost in comprehension. A solution proposed by an AI is not always understandable, and understandability, as Stolze explains, is a definite requirement.
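The interview does not spell out how Aigency uses OpenML, but to give a flavour of the kind of half-hour analysis Stolze describes, here is a minimal sketch assuming the Python openml package and scikit-learn. The dataset ID and model choice are illustrative assumptions, not anything confirmed in the interview.

```python
# Minimal sketch: fetch a dataset from openml.org and fit a quick baseline.
# Dataset ID 31 ("credit-g") and the random forest are illustrative choices.
import openml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Download a dataset and its default target column from OpenML
dataset = openml.datasets.get_dataset(31)
X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

# One-hot encode categorical features so the model can consume them
X = pd.get_dummies(X)

# A fast, generic baseline: cross-validated random forest accuracy
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.3f}")
```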

“The developments in the AI field are currently moving so quickly that we’re in danger of going off the rails. Computing capacity and data sets are growing exponentially, but the human ability to adapt is not. If we do not impose a number of good frameworks and principles for AI now, we’ll be racing off in the wrong direction.”
“I tell students: since they’re not spending a month on analysis, just half an hour, they can easily spend two weeks finding out and explaining why their solution works. A solution isn’t a solution if you don’t understand how it works. You always have to be able to explain why an AI has discovered something.”
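Stolze does not prescribe a particular technique for this, but one common starting point for explaining why a model works is to measure which features it actually relies on. Below is a minimal sketch assuming scikit-learn’s permutation importance on a synthetic dataset; the data and model are purely illustrative.

```python
# Minimal sketch of one way to probe why a model works: permutation importance.
# The synthetic data and the gradient boosting model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and see how much the test score drops:
# the features whose shuffling hurts the most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```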
Contemplating the ethics – the why, the how, and whether it’s a good idea at all – is now the biggest challenge in the world of machine learning and artificial intelligence, according to Stolze. “I was at a symposium on artificial intelligence yesterday, in the Royal Palace on Dam Square. It was attended not only by the King and Queen, but also by scientists and people from the business community. The discussion was mainly about technology, or about dangers that are still very distant (after 2040). Somehow, what we can do today in the Netherlands is hardly being discussed at all.”

“A good example from actual practice is ING Bank,” Stolze says. “They have an ethics board that meets once a month and really has the mandate to reject project proposals. We need more examples like this.”

“That’s also why I support Tada. Tada is a compass and a starting point for conversation, and every company should define its own direction from there. Just saying ‘Don’t be evil’ isn’t enough anymore.”

Watch more: https://youtu.be/mjvA6EclGQM
