Digital Ethics: making AI work for the good of humanity

“We all have a stake in how these technologies interact with society, and it’s up to us all to define what is ‘just’.” TU Delft’s TPM-AI Lab hosted “The (In)Justice of AI” conference; among the speakers was Digital Ethics Lead Hilary Richters, who presented industry perspectives on this hot topic.

As a professional services provider, Deloitte interacts with a wide range of companies and aspires to help them transition to more responsible business models. As such, digital ethics is very much a part of its Risk Advisory practice, says Richters. “When it comes to data and technology, we help clients develop a responsible strategy that is right not only for them, but also for society - particularly for the people who may be negatively impacted by new technologies, the underdogs.”

Doing business responsibly depends on the ability to build trust with stakeholders, and digital ethics is at the heart of that. “Digital ethics involves the translation of the moral values of the company and its stakeholders into business requirements when using data and technology,” Richters explains.

The evolution of responsible business

Responsible business practices used to focus on managing risk, but these days trustworthiness is key. Why? With the emergence of new technologies, and AI in particular, the risk landscape has changed. We face new risks, and the risks we already know are harder to identify. One example is the ability of AI to learn continuously from new data and to make decisions driven by complex statistical methods rather than clear, predefined rules. “If AI technology like facial recognition is used to drive decisions, the companies using it will come under increasing pressure from the public to be trustworthy. Because with this new technology it’s no longer you, your company and your clients that can be at risk - it’s the whole of society.”

Building trust in the digital space takes not only words but actions: demonstrating transparency, accountability, fairness and so on - and not just once, but over and over. Ultimately, though, responsible business will be defined by the value a company creates for all its stakeholders. This trend is already visible, says Richters. “Think, for example, of the pledge signed last year by 181 CEOs to prioritise customers, employees, suppliers and their communities along with shareholders. Meanwhile, many companies these days are communicating their ‘data principles’, or working with academia to create responsible AI solutions.”

The justice of AI

The justice of AI lies in its value for businesses as an engine of (revenue) growth, brand recognition and efficiency. AI is used across all industries in HR, marketing and fraud monitoring, as a predictive tool to create insights on specific targets and aid decision making. AI has more sector-specific applications, too. Financial services companies use it to trace customer behaviour, track transactions, identify suspicious activities and/or enable flexible pricing. Consumer-oriented businesses in media, telecommunications and retail use AI to reduce churn risk and increase consumer loyalty. Technology companies run AI-based platforms to provide personalised services. AI is also invaluable to the public sector, including science and healthcare. “In short,” says Richters, “AI adds major value and is here to stay.”

The injustice of AI

On the other hand, AI carries major risks if not tightly managed - risks that challenge our democratic values and freedoms, and even our fundamental human rights. One is that AI may replicate and amplify existing social bias, leading to discrimination and greater inequality. On social media platforms, AI-informed recommendations may narrow the choice of what users get to see, thereby limiting the opinions they are able to form. If unchecked, AI also has the potential to create and spread misinformation. Meanwhile, privacy is under threat because of the deep insights that AI can obtain about individuals, their behaviour and their personalities.

AI without transparency also undermines the trust people place in companies. The more complex the algorithm, the harder it is to explain the outcomes to stakeholders. “When AI systems enter decision-making chains, or take over from human decision making,” Richters adds, “it can become unclear who is accountable and where liability lies.”

Ethics mitigates risks

The function of Digital Ethics, therefore, is to harness the benefits of AI while mitigating the risks, says Richters. However, even companies using AI ethically cannot avert all the risks by themselves. We all have a role to play, Richters believes. “Change will happen when the leaders of every stakeholder group truly understand that all these issues are interconnected. It’s a matter of connecting companies, governments, communities, regulators and so on. When we talk about stakeholder value, we need to realise that we all have a stake in how these technologies interact with society. And it’s up to us all to define what is ‘just’. If we acknowledge the different roles, responsibilities, benefits and harms of each stakeholder, then we can find a way to make AI work for the good of humanity.”

Watch out for our upcoming blog series on the EU regulation ‘Trustworthy AI’ and its implications across industries.

For more information, please connect with Hilary Richters.

