Taking a proactive approach to the ethics of AI
Ethical challenges should demand attention right from the start of AI initiatives
Early adopters of artificial intelligence (AI) must confront an array of potential risks, including ethical concerns. Just as many companies actively address cybersecurity from the beginning of their projects, they should give ethical risks the same take-charge treatment.
A recent Deloitte global study of AI adopters revealed that executives increasingly recognize the importance of AI, including its ability to deliver competitive advantage and empower their workforce.1 Nearly two-thirds (63 percent) say that AI technologies are “very” or “critically” important to their success, and that figure is expected to reach 81 percent within two years.
As with any emerging technology, the upside brings with it potential perils. AI adopters should confront and manage cybersecurity vulnerabilities, worries about making the wrong strategic or mission-critical decisions based on AI recommendations, regulatory non-compliance risk, legal responsibility, and ethical risks. As hands-on experience grows, companies feel better prepared to address these risks: Only 23 percent of those that have built one or two full AI implementations report feeling “fully prepared,” and that number rises steadily to 69 percent among those that have built 20 or more AI systems.
A paradox arises: Even as experience and preparedness increase, overall concern about AI risks continues to grow. Specific worries follow different trajectories. For those just getting started with AI, cybersecurity vulnerabilities are paramount, with nearly six in ten respondents rating it a top-3 AI concern (see the figure). Ethical concerns are least worrisome, rated as a top-3 concern by just a quarter of executives. With greater experience, cybersecurity worries diminish—but ethical concerns ramp up. For companies that have built 11 or more AI systems, ethical risks are almost as concerning as cybersecurity.2
The vibrant public conversation around ethical considerations is raising awareness.3 Key ethical risks include potential bias in AI decision-making, lack of explanations for AI decisions, use of personal data without consent, job cuts due to automation, and the use of AI to create falsehoods (“deep fakes”). Some governments are springing into action, establishing AI task forces and ethics policies and collaborating with corporations. Many major AI technology vendors are creating ethics guidelines, establishing AI governance teams, and releasing tools that address bias and explainability.4 Several universities are researching AI ethics issues, establishing AI ethics courses, and serving on corporate AI governance teams.5
What can AI adopters do to conquer their ethical worries? The approach toward cybersecurity may offer a valuable lesson: Nearly half of executives say their company proactively integrates cybersecurity planning into their AI projects—and the worries decrease with experience. AI adopters should be similarly proactive in addressing the ethical aspects of their AI initiatives—right from the start and throughout the product lifecycle, not as an afterthought. Reputations and customer trust hang in the balance.
Deloitte’s recent AI ethics paper offers practical advice. Engage with the board and stakeholders to establish governance of AI initiatives. Interact with regulators to help shape evolving regulations. Integrate emerging tools and processes that help detect bias in AI data and algorithms, and train developers to identify and remedy issues. Be transparent with customers about where AI is used and how data is being collected and used. Prepare employees for how AI may affect their jobs. In short, take a holistic and proactive approach that puts AI ethics front and center.
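To make the bias-detection step concrete, here is a minimal sketch of the kind of check such tools perform. It assumes a binary classifier whose predictions can be paired with a protected attribute; the demographic_parity_gap function, the sample data, and the 0.2 threshold are illustrative assumptions, not any specific vendor's tooling:

    from collections import defaultdict

    def demographic_parity_gap(preds, groups):
        """Largest gap in positive-prediction rates across groups.

        preds: iterable of 0/1 model predictions (hypothetical example data).
        groups: iterable of protected-attribute values, aligned with preds.
        """
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(preds, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        # Positive-prediction rate per group; a wide spread suggests the model
        # favors one group over another on this simple fairness criterion.
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Illustrative data: group "a" receives favorable predictions far more often.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)  # {'a': 0.75, 'b': 0.25}
    if gap > 0.2:  # threshold chosen for illustration only
        print(f"Warning: demographic parity gap of {gap:.2f}; review for bias.")

Demographic parity is only one of several fairness criteria (others compare error rates or calibration across groups), so a check like this flags candidates for human review rather than proving or disproving bias on its own.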
This charticle was authored by Susanne Hupfer on May 8, 2019.
Endnotes
1 Jeff Loucks et al., “Future in the balance? How countries are pursuing an AI advantage,” May 2019.
2 Other risks we surveyed (erosion of customer trust from AI failures, regulatory non-compliance risk, making the wrong strategic decisions based on AI recommendations, failure of AI in a mission-critical or life-and-death context, legal responsibility for AI decisions/actions) remain statistically unchanged when viewed against AI experience.
3 Deloitte Quid analysis indicates discourse on AI and ethics doubled from 2017 to 2018, as reported in Schatsky et al., “Can AI be ethical? Why enterprises shouldn’t wait for AI regulation,” April 2019.
4 Deep learning AI systems can behave like “black boxes,” giving answers or recommendations without being able to explain how they were computed. See Jason Bloomberg, “Don’t trust AI? Time to open the black box,” Forbes, September 16, 2018.
5 For descriptions of the ethical risks of AI systems and examples of corporate, government, and academic activities, see Schatsky et al., “Can AI be ethical? Why enterprises shouldn’t wait for AI regulation,” April 2019.