
The challenges of artificial intelligence

Models may be more accurate than humans, but they will still make mistakes

As Edison once said: “Whatever the mind of man creates should be controlled by man’s character.” But how can this be achieved if the creation is the result of complex self-learning algorithms? During the most recent Supervisory Board Debate, Catelijne Muller and Evert Haasdijk shared their views on the opportunities and challenges of artificial intelligence. One of the statements up for debate was: “Every supervisory board needs to have a member with artificial intelligence in his or her portfolio.”

Deloitte Supervisory Board Debate | AI in the Boardroom

‘Yes’, answers part of the audience. Society is changing more rapidly than ever, and developments in AI influence our daily lives more and more. If you, as a supervisory board member, don’t know anything about that, you should ask yourself what you are doing on the board. Somebody needs to know what the opportunities and threats of AI are for the company. The board needs a broad vision, but someone has to thoroughly understand how the systems work. ‘No’, conclude almost as many participants: a supervisory board shouldn’t rely on just one member when it comes to AI. That is dangerous. The board as a whole should know enough about the subject, because the essence of a supervisory board is that decisions are taken collectively. According to Evert Haasdijk, Senior Manager and AI expert at Deloitte, real AI knowledge can easily be bought in. As a board member, you don’t have to personally know the ins and outs of the latest techniques; you just need to know what you don’t know.
 

Data aren’t facts
Both keynote speakers emphasize the importance of the explainability of AI models. Catelijne Muller, member of the European Commission’s High-Level Expert Group on AI and President of ALLAI Netherlands, gives an example of a malfunctioning algorithm: an American 17-year-old girl who was arrested for stealing a bike as a first offense was labeled a ‘high risk’ recidivist, while a man with a long criminal record caught shoplifting was labeled ‘low risk’. The man was white, the girl was black. It turned out that the AI system labeling prisoners wasn’t that smart, and the data it had learned from weren’t that good either. Muller: “‘Data’ isn’t a synonym for ‘facts’. Data can be messy, they can be biased, they can be incomplete. So no, I’m not scared of AI becoming too smart and taking over the world. I’m more afraid of the AI that is too stupid and has already taken over the world.”
 

Human-in-command
Muller: “These systems affect our lives so deeply that they had better be beyond good. After all, besides judging prisoners, AI systems also decide on mortgages, health insurance, where kids go to school, et cetera. That is why we set standards. We need to make sure they are safe, bias-free and complete, and that they can’t be hacked.” In addition, there need to be more people at the table, according to Muller. If you design an algorithm to send someone to prison, you might want to talk to a judge or a district attorney. Employers and employees should sit at the table and discuss how AI can be implemented in the workplace so that it augments the people who work there. The whole of society should join the discussion around AI. Muller pleads for a human-in-command approach to AI: not only in a technical sense, but also in deciding if, when and how we use it in our lives, our homes, our schools, our workplaces, our courtrooms and our hospitals.


“I’m not scared of AI becoming too smart and taking over the world. I’m more afraid of the AI that is too stupid and has already taken over the world.”


Explainable and transparent
We also need to understand the individual decisions a model makes, adds Evert Haasdijk: “Computer says ‘no’” is not a valid explanation. Models may be more accurate than humans, but they will still make mistakes. We can create systems that make decisions very quickly, but they have no conception of the context they are used in. A good example of where this can lead is Microsoft researcher Rich Caruana’s story about a neural network model that predicted mortality for patients suffering from pneumonia. It was very accurate, but he still advised doctors not to use it, because another, explainable model had concluded that people with asthma had a low probability of dying when they contracted pneumonia. The model reached this odd conclusion because asthma patients who catch pneumonia are usually sent to the ICU immediately, which reduces their chance of dying. So we need human intelligence, in this case doctors, applying common sense. And we therefore also need to detect bias, to find out when and why a model’s decisions are wrong, and to explain individual decisions: by inspecting the models, through statistical analysis and with transparent approximations. Haasdijk: “We need transparency, because we need human insight into the implications of AI models and self-learning machines. We need a methodology that encompasses these requirements.”
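
To give a sense of what such a “transparent approximation” can look like in practice, the sketch below (an illustration, not something presented at the debate) fits a shallow, human-readable decision tree to mimic the predictions of a more opaque model, so its approximate decision rules can be inspected. The dataset, model choices and parameters are assumptions made purely for the example.

# A minimal sketch of a transparent approximation (global surrogate model).
# A shallow decision tree is trained to mimic a black-box model's predictions,
# so a human can read the rules that roughly describe its behaviour.
# Dataset and model choices are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but hard for a human to interpret directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate: trained on the black box's *predictions*, not the true labels,
# so it approximates what the model does rather than what the data say.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate reproduces the black box's decisions.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")

# Human-readable rules that approximate the black box's behaviour.
print(export_text(surrogate, feature_names=list(X.columns)))

A surrogate like this never replaces the original model; it only offers a readable approximation whose fidelity must be checked, which is exactly why the human inspection Haasdijk describes remains necessary.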
 

“We need transparency, because we need human insight into the implications of AI models and self-learning machines. We need a methodology that encompasses these requirements.”