Analysis

Ethics in AI

Elevating and addressing the essential issues

The maxim "with great power comes great responsibility" holds significant meaning for any business leader or technologist working in artificial intelligence (AI). As AI transforms markets and becomes part of our daily lives, discussions about how new technology can be used, and how it should be used, are commonplace, and often spirited.

April 16, 2019

A blog post by Vic Katyal, principal, Deloitte & Touche LLP.

Executives recognize the importance of addressing the ethical issues surrounding AI in their organizations. A 2018 survey by Deloitte Consulting LLP of 1,400 US executives about AI found that 32 percent ranked ethical issues as one of the top three risks of AI. They noted a range of concerns, including fears that AI could help create or spread false information, misuse personal data, or include unintentional bias.

Until the regulatory environment catches up, leaders of companies in the AI space are charged with recognizing and taking control of the ethical aspects of AI. Recently, Tom Davenport and I collaborated on an article for MIT Sloan Management Review, "Every leader's guide to the ethics of AI," written to guide executives responsible for implementing and managing AI in their organizations. If you're wondering where to begin, here are some of the specific considerations discussed in the article:

  • Elevate the issue. Since an AI mishap could have a significant impact on a company’s reputation and value, many organizations are recognizing that how big data and AI are used should be a board-level concern. Understanding the objectives for using AI and keeping the focus on desired outcomes helps guide the conversation. Some companies have formed governance and advisory groups made up of senior C-suite leaders. Others are appointing industry leaders and AI subject matter specialists to serve on an ethics board.
  • Avoid bias in AI applications. Some AI applications unintentionally put certain groups at a disadvantage, even when the creators of the algorithms intended no bias or discrimination. The issue of algorithmic bias has surfaced in a variety of situations, including judicial sentencing, credit scoring, education curriculum design, hiring decisions, and even digital advertising. Organizations should develop a set of risk management guidelines to help reduce bias within their AI or machine learning algorithms.
  • Disclose AI use. Many companies are letting customers know when they are interfacing with AI, whether an intelligent agent or chatbot, or a machine learning model. The adoption of the General Data Protection Regulation in Europe sets a good example: Customers have the right to an explanation about how their data is being used. This disclosure could be balanced by an explanation of how disclosing personal data can generate value to the customer and to other stakeholders (e.g., sharing data about drug efficacy can lead to better outcomes for other patients).
  • Honor privacy concerns. As AI technologies are increasingly used in marketing and security systems, consumers may begin pushing back in situations where they feel their privacy is being encroached upon. For retailers who use browsing history to personalize ads, a pop-up message that says "our website uses cookies" can be an effective precautionary measure. Since using AI to identify fraud and data breaches can result in a fair number of false positives, some individuals may be unfairly accused or inconvenienced. These situations may warrant additional human investigators to help avoid undesirable outcomes.
  • Reassure employees. Early concerns about AI replacing human workers may have diminished, but employees remain concerned about how their jobs may change in the future. In many cases, they can benefit from machines working alongside and helping them to work smarter and more efficiently. Leaders can help employees feel more comfortable with AI by providing training and helping them to acquire new skills.
  • Apply AI to augment human ability, not replace it. Humans working with machines can be more powerful than humans or machines working alone, and most of today's AI technologies rely on human assistance to perform the tasks they are designed to do. In many situations requiring experience or intuition, AI cannot replace human problem-solving, but it can help humans respond with greater insight.

While many companies are just beginning their AI journeys, now is the time to consider the ethical issues. Take a deeper dive into the issues of bias, privacy, and security in my recent article with Tom Davenport, or contact me if you’d like to discuss.
