Europe’s “Trustworthy AI” meets AI Ethics
The ethics of AI
“Trustworthy AI” is a term coined by the High-Level Expert Group (HLEG) advising the EU on artificial intelligence. We asked Prof. Dr. Aimee van Wynsberghe, a member of the HLEG, what Trustworthy AI means and what the implications are for EU businesses.
Go directly to
- What does it mean for AI to be trustworthy?
- What does it mean for companies?
- What challenges do companies face?
- What ethical risks are companies exposed to?
- What are the most exciting developments right now?
- What would you advise companies to do?
The term “Trustworthy AI” was first coined by the High-Level Expert Group (HLEG) on artificial intelligence, an independent body set up to advise the European Commission. Prof. Dr. Aimee van Wynsberghe, Humboldt Professor in Applied Ethics of AI at Bonn University and Edge Fellow at the Deloitte Center for the Edge, was a member of the HLEG. We asked her what Trustworthy AI means and what the implications are for EU businesses.
What does it mean for AI to be trustworthy?
The term “Trustworthy AI” can be applied to the processes for creating, implementing, and regulating AI. To me, it’s about trusting the humans who are making and governing the technology, not so much about trusting the technology itself. It’s about the struggle of harnessing what is good about AI while at the same time protecting people, respecting their human rights and societal values. To achieve that, we have to understand AI’s ethical risks now so that we can prevent them from materialising in the future. Artificial intelligence has been around for many decades, and yet we’re still learning about what it can do and what kind of impact it will have on society. In some parts of the world the approach has been to act first and think later. It soon became clear that AI has great potential for good, but also potential to do considerable damage. The HLEG developed a vision of how the EU could be different in this respect, with a human-centred and values-based approach. To actually make this happen, though, you need not just principles, but also procedures, assessment tools and incentives, and this is exactly what the HLEG provided in our reports. Many of these insights were used by EU policy makers to write a white paper with proposed regulation to ensure Trustworthy AI.
What does it mean for companies?
There are companies that are committed to responsible business practices and have voluntarily chosen a responsible approach to AI. They’re in favour of regulation, as it will create a level playing field and eliminate unfair competition. However, there are also companies and sectors that fear regulation will restrict their entrepreneurial style and stifle innovation. In either case, companies should consider AI ethics as a starting point for achieving Trustworthy AI. To be sure, ethics and regulation are not the same: ethics is voluntary, while regulation is not. So for companies that want to show their commitment to doing the right thing when it comes to AI, incorporating AI ethics into the fabric of their organisation is a strong start.
What challenges do companies face?
For all companies, it’s a challenge to start preparing to comply with regulations that are still in the draft stage. My tip for them is to invest in AI ethics. That will give them a better understanding of the ethical risks that they run themselves or expose others to. For example, if they have bought AI models from another company to use in their own, do they know how the model was trained, what kind of training data was used, and whether and how the model was validated? These are the kinds of questions that AI ethics addresses and that AI regulation will address.
Another challenge for companies is to know where to begin. Who in their organisation is going to take ownership of AI ethics? Do you assign it to your privacy or cybersecurity team? To HR? AI ethics transcends departmental silos. It’s about understanding the new risks associated with this technology. What new forms of discrimination may be introduced when historical data is used to train an algorithm, or when companies use facial and emotion recognition in job interviews? We need to create new positions in companies to address these, and many other, AI-related issues. We’re talking about the ethics of a new and emerging digital technology – a technology that in some cases is a black box to society and even its creators.
What ethical risks are companies exposed to?
First, companies should understand that designing, developing and using AI in an ethical way is the right thing to do, and that consumers are paying more and more attention to companies that are thinking about this. Companies run the risk of overstepping ethical lines with their AI activities, either intentionally or unintentionally. When things go wrong, they’ll be blamed if they didn’t identify and address these risks beforehand, and their reputation will suffer. Another risk has to do with our lack of experience with AI: we’re still learning how it works and how deep the impacts will go. Companies can and should help uncover and understand these risks. As I see it, the ethical risks of AI are like the layers of an onion, and we’re at the first layer now. For example, we’ve discovered only recently how AI, if trained on historical data, can reinforce existing discrimination in selection procedures. As we learn more about AI in society, we will peel back more layers of ethical issues. Five years from now, for instance, another layer of issues will concern the environmental impact of AI. I think the carbon footprint of AI will also become a factor in deciding which AI applications companies can ethically afford to pursue. If companies are part of understanding this impact, we have a chance at mitigating the negative environmental effects sooner rather than later.
What are the most exciting developments right now?
AI has very exciting potential. In healthcare, for example, AI is being used to analyse moles and determine whether they’re malignant or benign. The software has been trained with millions of pictures and is sometimes already better at it than a human dermatologist. But if patients don’t trust the way it was developed, are they going to trust their diagnosis? I don’t think so! Trustworthy AI is essential if we’re truly going to reap AI’s benefits. This makes it all the more exciting that the EU has chosen to become a front runner in Trustworthy AI.
What would you advise companies to do?
Start looking into the ethics of AI and how it might be relevant for you. Browse through podcasts and blog posts, attend conferences. Reach out to experts, like academics or consultants. Corporate consultancy teams, such as Deloitte, have been following this topic very closely and have already made strides in addressing the key issues. There are tools out there that companies can use to support their data ethics processes and build an ecosystem of trust with their stakeholders. So I’d say, seek advice before you try reinventing the wheel.
Trustworthy AI is very much on the radar of Deloitte’s Digital Ethics team. Whatever question you have about AI, the ethical implications or the related risk and compliance issues, it is one we have already asked ourselves. As you embark on your AI adventure, we can support your company with expert advice and state-of-the-art tools.
As such, Trustworthy AI fits into Deloitte’s broader ambition to do responsible business and help its clients do the same. The Deloitte network is committed to driving societal change and promoting environmental sustainability. Working in innovative ways with government, non-profit organisations, and civil society, Deloitte is designing and delivering solutions that contribute to a sustainable and prosperous future for all.
For more information please contact Hilary Richters or Annika Sponselee via the contact details below.