Ethics in AI
Elevating and addressing the essential issues
As AI transforms markets and becomes a part of our daily lives, discussions around how new technology can and should be used are commonplace—and often spirited.
April 16, 2019
A blog post by Vic Katyal, principal, Deloitte & Touche LLP.
Executives recognize the importance of addressing the ethical issues surrounding AI in their organizations. A 2018 survey by Deloitte Consulting LLP of 1,400 US executives about AI found that 32 percent ranked ethical issues as one of the top three risks of AI. They noted a range of concerns, including fears that AI could help create or spread false information, misuse personal data, or include unintentional bias.
Until the regulatory environment catches up, leaders of companies in the AI space are charged with recognizing and taking control of the ethical aspects of AI. Recently, Tom Davenport and I collaborated on an article for MIT Sloan Management Review, Every leader’s guide to the ethics of AI, written to guide executives responsible for implementing and managing AI in their organizations. If you’re wondering where to begin, here are some of the specific considerations discussed in the article:
- Elevate the issue. Since an AI mishap could have a significant impact on a company’s reputation and value, many organizations are recognizing that how big data and AI are used should be a board-level concern. Understanding the objectives for using AI and keeping the focus on desired outcomes helps guide the conversation. Some companies have formed governance and advisory groups made up of senior C-suite leaders. Others are appointing industry leaders and AI subject matter specialists to serve on an ethics board.
- Avoid bias in AI applications. Some AI applications unintentionally put certain groups at a disadvantage, even when the creators of the algorithms did not intend any bias or discrimination. The issue of algorithmic bias has surfaced in a variety of situations, including judicial sentencing, credit scoring, education curriculum design, hiring decisions, and even digital advertising. Organizations should develop a set of risk management guidelines to help reduce bias within their AI or machine learning algorithms.
- Disclose AI use. Many companies are letting customers know when they are interfacing with AI, whether an intelligent agent or chatbot, or a machine learning model. The adoption of the General Data Protection Regulation in Europe sets a good example: Customers have the right to an explanation about how their data is being used. This disclosure could be balanced by an explanation of how disclosing personal data can generate value to the customer and to other stakeholders (e.g., sharing data about drug efficacy can lead to better outcomes for other patients).
- Reassure employees. Early concerns about AI replacing human workers may have diminished, but employees remain concerned about how their jobs may change in the future. In many cases, they can benefit from machines working alongside them, helping them work smarter and more efficiently. Leaders can help employees feel more comfortable with AI by providing training and helping them acquire new skills.
- Apply AI to augment human ability, not replace it. Humans working with machines can be more powerful than humans or machines working alone, and most of today’s AI technologies rely on human assistance to perform the tasks they are designed to do. In many situations requiring experience or intuition, AI cannot replace human problem-solving, but it can help humans respond with greater insight.
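The bias guidelines mentioned above can take concrete form in automated checks on model outcomes. As a minimal, hypothetical sketch (the data, group labels, and the 80 percent threshold are illustrative assumptions, not drawn from the article), one widely used check is the disparate impact ratio, which compares favorable-outcome rates across groups:

```python
# Illustrative bias check: compare favorable-outcome rates across two groups.
# All data below is hypothetical; real audits would use production outcomes.

def selection_rate(outcomes):
    """Fraction of favorable outcomes (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the two groups' selection rates. Values below ~0.8 are a
    common warning sign (the 'four-fifths rule' used in US hiring audits)."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical loan-approval outcomes (1 = approved) for two groups
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged for human review")
```

A check like this does not prove or disprove discrimination on its own; it simply flags outcome gaps so that a governance group can investigate the underlying model and data.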
While many companies are just beginning their AI journeys, now is the time to consider the ethical issues. Take a deeper dive into the issues of bias, privacy, and security in my recent article with Tom Davenport, or contact me if you’d like to discuss.