With AI applications becoming ubiquitous in and out of the workplace, can the technology be controlled to avoid unintended or adverse outcomes? Organizations are launching a range of initiatives to address ethical concerns.
With great power, the saying goes, comes great responsibility. As artificial intelligence (AI) technology becomes more powerful, many groups are taking an interest in ensuring its responsible use. The questions that surround AI ethics can be difficult, and the operational aspects of addressing AI ethics are complex. Fortunately, these questions are already driving debate and action in the public and commercial sectors. Organizations using AI-based applications should take note.
A growing number of companies see AI as critical to their future. But concerns about possible misuse of the technology are on the rise. In a recent Deloitte survey, 76 percent of executives said they expected AI to “substantially transform” their companies within three years,7 while about a third of respondents named ethical risks as one of the top three concerns about the technology. The press has widely reported incidents in which AI has been misused or had unintended consequences.8
The conversation about responsible AI is hardly limited to concerns about controversial applications of the technology, such as automated weapons. It also considers how the infusion of AI into common activities such as social media interactions, credit decisions, and hiring can be controlled to avoid unintended or adverse outcomes for individuals and businesses. The discussion around AI and ethics has grown far more urgent in the last decade or so, and many initiatives to tackle ethical questions surrounding AI have taken shape in the last couple of years. This urgency is driven largely by recent advances in AI technology, growing adoption, and AI's increasing importance in business decision-making.
It’s worth noting that concerns about the ethics of technology generally and AI specifically are nothing new. The topic was explored at least as far back as 1942, when science-fiction writer Isaac Asimov introduced his Three Laws of Robotics in a short story.9 In 1976, German-American computer scientist Joseph Weizenbaum argued that AI technology should not be used to replace people in positions that require abilities such as compassion, intuition, and creativity.10 Still, today’s AI presents enormous opportunities for businesses while introducing some novel risks that need to be managed.
Some of the ethical risks associated with AI use differ from those associated with conventional information technology. This is due to a variety of factors, including the role played by large datasets in AI systems, the novel applications of AI technology (such as facial recognition), and the capabilities that some systems demonstrate, from automatic learning to superhuman perception. As MIT professors Stefan Helmreich and Heather Paxson note, “Ethical judgments are built into our information infrastructures themselves. That’s what AI does: It automates judgments—yes, no; right, wrong.”11 Prominent issues associated with ethical AI design, development, and deployment include the following:
Bias and discrimination. AI systems learn from the datasets with which they are trained. Depending on how a dataset is compiled or constructed, it may reflect assumptions or biases—such as gender, racial, or income biases—that influence the behavior of a system trained on it. These systems’ developers typically intend no bias, but instances of AI-driven bias or discrimination have been widely reported in application areas such as recruiting, credit scoring, and judicial sentencing.12 Organizations need to ensure that their AI solutions make decisions fairly and do not propagate biases when providing recommendations.
Lack of transparency. It is natural for customers or other parties affected by technology to want to know something about how the system that affected them works—what data it is using and how it is making decisions. However, much AI development entails building highly effective models whose inner workings are not well understood and cannot be readily explained—they are black boxes. Techniques are emerging that help shine a light inside the black box of certain machine learning models, making them more interpretable and accurate, but they are not suitable for all applications.13 (A brief illustrative sketch of one such technique appears after this list.) Ethical AI use takes into account a responsibility to be transparent about the workings of systems and the use of data wherever possible.
Erosion of privacy. Many companies collect large quantities of personal data from consumers when they register for or use products or services. That data can be used to train AI-based systems for purposes such as targeted advertising, promotions, and personalization. Ethical issues arise when that information is used for a different purpose—say, to train a model for making employment offers—without users’ knowledge or consent. A recent study found that 60 percent of customers are concerned about AI-based technology compromising their personal information.14 To build customer trust, companies need to be transparent about how collected information is being used, create clearer mechanisms for consent, and better protect individual privacy.
Poor accountability. With AI technologies increasingly automating decision-making in a wide range of critical applications, such as autonomous driving, disease diagnosis, and wealth management, the question arises of who should bear responsibility for any harm these systems may cause or contribute to. For instance, if a self-driving car fails to stop for a pedestrian and hits that person, who should be held responsible: the car manufacturer, the passenger, or the owner? Existing accountability mechanisms for IT systems do not adequately address such scenarios. Businesses, governments, and the public need to work together to establish proper accountability structures.
Workforce displacement and transitions. Companies are already using AI to automate tasks, with some aiming to take advantage of automation to reduce their workforces. In the 2018 Deloitte executive survey, 36 percent of respondents saw job cuts from AI-driven automation rising to the level of an ethical risk.15 Even jobs that are not eliminated may be impacted in some way by AI. Employers should find ways to use AI to increase opportunities for employees while mitigating negative impacts.
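To make the transparency concern above more concrete, the sketch below illustrates one emerging technique for peering into a black-box model: permutation importance, which measures how much a model's predictive accuracy depends on each input feature. This is a minimal, illustrative example in Python using scikit-learn on synthetic data; the model, features, and data are invented for demonstration and do not represent any particular deployment.

```python
# Illustrative sketch: probing an opaque model with permutation importance.
# The synthetic data and model choice are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Create a synthetic classification dataset (5 informative features out of 10).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model whose internal logic is not directly human-readable.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. Large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Reports like this do not open the black box completely, but they give teams a defensible, repeatable way to describe which factors drive an automated decision.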
The increasing adoption of AI technologies, and growing awareness of various ethical risks associated with them, calls for urgency in designing approaches and mechanisms to deal with those risks. Governments, technology vendors, corporates, academic institutions, and others have already started laying the foundation for ethical AI use.
Many of the technology vendors creating AI tools and platforms are also at the forefront of ethical AI development efforts. Major technology companies including Google and IBM have developed ethical guidelines to govern the use of AI internally as well as guide other enterprises.16 For instance, while releasing its ethical guidelines, Google pledged not to develop AI specifically for weaponry or for surveillance tools that would violate “internationally accepted norms.”17 Additionally, many technology vendors have launched or open-sourced tools to address ethical issues such as bias and lack of transparency in AI development and deployment. Examples include Facebook’s Fairness Flow, IBM’s AI Fairness 360 and AI OpenScale environment, and Google’s What-If Tool.18
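As an illustration of how such toolkits are commonly applied, the sketch below computes two simple group-fairness metrics with IBM's open-source AI Fairness 360 library. The toy loan-approval data and column names are invented for demonstration, and the exact class and parameter names shown here should be verified against the toolkit's current documentation before use.

```python
# Illustrative sketch (not vendor guidance): group-fairness metrics with the
# open-source AI Fairness 360 toolkit. Data and column names are invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy loan-approval records: 'sex' is the protected attribute (1 = privileged
# group in this example), 'approved' is the favorable outcome.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 80, 55, 90, 58, 75, 52, 88],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=["approved"],
                             protected_attribute_names=["sex"],
                             favorable_label=1.0,
                             unfavorable_label=0.0)

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])

# Disparate impact: ratio of favorable-outcome rates (unprivileged/privileged).
# Values far below 1.0 suggest the data or decisions disadvantage one group.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Checks of this kind are typically run on both training data and model outputs, and flagged results are escalated for human review rather than treated as automatic verdicts.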
Governments and regulators have already begun to play a crucial role in establishing policies and guidelines to tackle AI-related ethical issues. For instance, the European Union’s General Data Protection Regulation (GDPR) requires organizations to be able to explain decisions made by their algorithms.19 The EU is just one of a growing list of governments, including those of the United States, the United Kingdom, Canada, China, Singapore, France, and New Zealand, that have released AI strategies, road maps, or plans focused on developing ethical standards, policies, regulations, or frameworks.20 Other notable government initiatives include setting up AI ethics councils or task forces and collaborating with other national governments, corporates, and other organizations.21 Though most of these efforts are still in their initial phases and do not impose binding requirements on companies (with GDPR a prominent exception), they signal growing urgency about AI ethical issues.
Universities and research institutions are playing an important role as well. Not only do they educate those who design and develop AI-based solutions—they are researching ethical questions and auditing algorithms for the public good. A number of universities, including Carnegie Mellon and MIT, have launched courses dealing specifically with AI and ethics.22 MIT also created a platform called Moral Machine23 to crowdsource human judgments on how self-driving cars should respond to a variety of morally fraught scenarios. Indeed, ethics was a central theme at the recent launch of MIT’s new Schwarzman College of Computing.24 Moreover, academics are getting seats on AI governance teams at many technology companies and other enterprises as external advisers to help guide the responsible development of AI applications.25
Consortia and think tanks are bringing together technology companies, governments, nonprofit organizations, and academia to collaborate on a complex and evolving set of AI-related ethical issues, leverage each other’s expertise and capabilities, and simultaneously build the AI ecosystem. One such consortium is the Partnership on AI, which counts 80-plus partner organizations.26 Companies across sectors are working to adopt ethical AI practices such as establishing ethics boards and retraining employees, and professional services firms are guiding clients on these issues.27
Technological progress tends to outpace regulatory change, and this is certainly true in the field of AI. But organizations may not want to wait for AI-related regulation to catch up. To protect their stakeholders and their reputations, and to fulfill their ethical commitments, organizations can do many things now as they design, build, and deploy AI-powered systems.28
Since any AI-related ethical issue may carry broad and long-term risks—reputational, financial, and strategic—it is prudent to engage the board to address AI risks. Ideally, the task should fall to a technology or data committee of the board or, if no such committee exists, the entire board.
Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should consider setting up a dedicated AI governance and advisory committee, with cross-functional leaders and external advisers, that engages with stakeholders (including multi-stakeholder working groups) and establishes and oversees the governance of AI-enabled solutions across their design, development, deployment, and use. With regulators specifically, organizations need to stay engaged not only to track evolving regulations but also to help shape them.
AI developers need to be trained to test for and remediate systems that unintentionally encode bias and treat users or other affected parties unfairly. Researchers and companies are introducing tools and techniques that can help. These include analytics tools that can automatically detect how data variables may be correlated with sensitive variables such as age, sex, or race; tests for algorithms whose decisions may be unfair to certain populations; and methods for auditing and explaining how machine learning algorithms generate their outputs. Companies will need to integrate new technologies, control structures, and processes to manage these risks.29 Organizations should stay informed of developments in this area and ensure they have processes in place to apply these tools and techniques appropriately.
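As a minimal sketch of the first kind of check described above, the snippet below scans a toy dataset for features that correlate strongly with a sensitive attribute and may therefore act as proxies for it. The dataset, feature names, and flagging threshold are assumptions chosen for illustration; real reviews would use domain-appropriate statistics and the kinds of toolkits mentioned earlier.

```python
# Illustrative sketch: flagging candidate proxy variables by measuring how
# strongly each feature correlates with a sensitive attribute. All data,
# names, and thresholds here are invented for demonstration purposes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Toy hiring dataset: 'zip_region' is constructed to correlate with 'sex'
# (a stand-in for a protected attribute), so it may act as a proxy for it.
sex = rng.integers(0, 2, n)
df = pd.DataFrame({
    "years_experience": rng.normal(8, 3, n),
    "zip_region": sex * 0.8 + rng.normal(0, 0.3, n),
    "test_score": rng.normal(70, 10, n),
})

THRESHOLD = 0.4  # arbitrary cutoff for flagging a feature for human review
for col in df.columns:
    corr = abs(np.corrcoef(df[col], sex)[0, 1])
    flag = "REVIEW" if corr > THRESHOLD else "ok"
    print(f"{col:20s} |corr with protected attribute| = {corr:.2f}  {flag}")
```

Flagged features are candidates for closer review, not automatic removal: some correlations are legitimate, while others quietly reintroduce the very bias an organization is trying to avoid.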
In an era of opaque, automated systems and deepfakes (realistic but synthetic images, videos, speech, or text generated by AI systems),30 companies can help build trust with stakeholders by being transparent about their use of AI. For instance, rather than masquerade as humans, intelligent agents or chatbots should identify themselves as such. Companies should disclose the use of automated decision systems that affect customers. Where possible, companies should clearly explain what data they collect, what they do with it, and how that usage affects customers.
Whether AI will eliminate jobs or transform them, it is likely that the technology eventually will affect many, if not most, jobs in some way. An ethical response for companies is to begin advising employees on how AI may affect their jobs in the future. This could include retraining workers whose tasks are expected to be automated or whose work will likely entail using automated systems—or giving them time to seek new employment. Companies in technology, financial services, energy and resources, and telecom have already started preparing their employees to stay relevant in an AI-driven future.31
New technology always brings benefits and risks, and AI is no different. Wise leaders seek to balance risks and benefits to achieve their goals and fulfill their responsibilities to their diverse stakeholders. Even as they seek to take advantage of AI technology to improve business performance, companies should consider the ethical questions raised by this technology and begin to develop their capacity to leverage it effectively and in an ethically responsible way.