
Perspectives

AI ethics: A business imperative for boards and C-suites

What does responsible AI look like—and who owns it?

Will artificial intelligence (AI) help us or hinder us? AI as a problem-solving tool offers great promise. But cyberattacks, social manipulation, competing financial incentives, and more warn of AI’s dark side. For organizations expecting AI to transform their companies, ethical risks could be a chief concern.

What are AI ethics?

AI is a broad term encompassing technologies that can mimic intelligent human behavior.1 Four major categories of AI are in increasingly wide use today:

  • Machine learning: The ability of statistical models to develop capabilities and improve their performance over time without the need to follow explicitly programmed instructions.
  • Deep learning: A complex form of machine learning used for image and speech recognition and involving neural networks with many layers of abstract variables.
  • Natural language processing (NLP): A technology that powers voice-based interfaces for virtual assistants and chatbots, as well as data-set querying, by extracting meaning and intent from text or generating it in a readable, stylistically neutral, and grammatically correct form.
  • Computer vision: A technology that extracts meaning and intent out of visual elements, whether characters (in the case of document digitization) or the categorization of content in images such as faces, objects, scenes, and activities.2
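To make the first category concrete, here is a minimal, purely illustrative sketch of machine learning: rather than following explicitly programmed rules, the model infers a decision rule from labelled examples, and that rule improves as more data accumulates. The task, data, and function names are all hypothetical.

```python
# A toy "nearest-centroid" learner: it derives a rule from labelled examples
# instead of being given one. All data below is invented for illustration.

def train_centroids(examples):
    """Learn a per-class mean from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign the class whose learned mean is closest to the input."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Hypothetical credit scores labelled "approve"/"review" by past underwriters.
data = [(720, "approve"), (780, "approve"), (540, "review"), (600, "review")]
model = train_centroids(data)
print(predict(model, 700))  # closest to the learned "approve" centroid
```

Deep learning follows the same learn-from-examples idea, but with many-layered neural networks in place of this single averaging step.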

Ethics is “the discipline dealing with what is good and bad and with moral duty and obligation,” as well as “the principles of conduct governing an individual or a group.”3 In commerce, an ethical mindset supports values-based decision making. The aim is to do not only what’s good for business, but also what’s good for the organization’s employees, clients, customers, and the communities in which it operates.

Bringing together these two definitions, “AI ethics” refers to the organizational constructs that delineate right and wrong—think corporate values, policies, codes of ethics, and guiding principles applied to AI technologies. These constructs set goals and guidelines for AI throughout the product lifecycle, from research and design, to build and train, to change and operate.

Deloitte Dbriefs webcast

AI ethics: An emerging imperative for the board and C-suite
Join us to learn about AI ethics as a boardroom topic and how leadership can proactively mobilize to respond.

Date: Wednesday, September 4
Time: 2 p.m. ET


Considerations for carrying out AI ethics

Conceptually, AI ethics applies both to the goal of an AI solution and to each of its parts. AI can be used to achieve an unethical business outcome even if its parts (machine learning, deep learning, NLP, and/or computer vision) were all designed to operate ethically.
 
For example, an automated mortgage loan application system might include computer vision tools designed to read handwritten loan applications, analyze the information provided by the applicant, and make an underwriting decision based on parameters programmed into the solution. These technologies don’t process such data through an ethical lens—they just process data. Yet if the mortgage company inadvertently programs the system with goals or parameters that discriminate unfairly based on race, gender, or certain geographic information, the system could be used to make discriminatory loan approvals or denials.
 
In contrast, an AI solution with an ethical purpose can rest on processes that lack the integrity or accuracy to achieve that end. For example, a company may deploy an AI system with machine learning capabilities to support the ethical goal of non-discriminatory personnel recruiting processes. The company begins by using the AI capability to identify performance criteria based on the best performers in the organization’s past. Such a sample of past performers may reflect biases rooted in past hiring characteristics (including discriminatory criteria such as gender, race, or ethnicity) rather than performance alone.
 
In other words, the machine learns from the data that it processes, and if the data sample isn’t representative or accurate, the lessons it learns won’t be accurate either and may lead to unethical outcomes. To understand where ethical issues with artificial intelligence could arise, and how those issues might be avoided in the future of work, it helps to organize AI along four primary dimensions of concern:
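The recruiting example can be sketched in a few lines. In this hypothetical illustration, the model never sees a protected attribute directly, yet a correlated proxy feature lets the bias baked into historical hiring labels leak into what it learns. All data and names are invented.

```python
# Past candidates: (test_score, proxy_feature, hired). Hiring historically
# favored proxy == 1 regardless of score -- a biased label, not a merit signal.
history = [
    (90, 0, False), (85, 0, False),   # strong candidates, rejected
    (60, 1, True),  (65, 1, True),    # weaker candidates, hired
]

def learn_rule(records):
    """'Learn' the simplest rule consistent with the labels: which proxy
    values were associated with hiring in the sample. Score is ignored
    because, in this biased sample, it never explains the outcome."""
    hired_proxies = {proxy for _, proxy, hired in records if hired}
    return lambda score, proxy: proxy in hired_proxies

model = learn_rule(history)
print(model(95, 0))  # False: a top scorer is rejected purely on the proxy
print(model(50, 1))  # True: a weak scorer is accepted
```

The model is faithful to its training data; the data, not the algorithm, carries the discrimination, which is why sample selection deserves ethical review before training begins.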

[Figure: The four dimensions of AI ethics concern]

  • Technology, data, and security. Look at the organization’s approach to the AI lifecycle from an ethical perspective, including the ways it builds and tests data and models into AI-enabled solutions. Leadership in this dimension comes from the organization’s information, technology, data, security, and privacy chiefs.
  • Risk management and compliance. Find out how the organization develops and enforces policies, procedures, and standards for AI solutions. See how they tie in with the organization’s mission, goals, and legal or regulatory requirements. The heads of risk, compliance, legal, and ethics play a role in this dimension.
  • People, skills, organizational models, and training. Understand and monitor how the use of AI impacts the experiences of both employees and customers. Continuously assess how operating models, roles, and organizational models are evolving due to the use of AI. Educate all levels of the workforce and implement training initiatives to retool or upskill capabilities. Establish protocols to incentivize ethical behavior and encourage ethical decisions along the AI lifecycle. In this dimension, the human resources function shares responsibility with learning and development teams, ethics officers, and broader executive leadership.
  • Public policy, legal and regulatory frameworks, and impact on society. Finally, develop a sense of AI’s place in the business environment. This includes the level of acceptance AI has in government and culture. It also includes the direction that laws and regulations are taking with regard to AI. Apply this information to the effect AI is likely to have over time in terms of education, employment, income, culture, and other aspects of society.

The chief information officer, chief risk officer, chief compliance officer, and chief financial officer have leadership roles across the first three dimensions, while the fourth dimension relies on leadership from politicians, regulatory agencies, and other policymaking bodies.


Examples of AI ethics risks

Ethical considerations exist along each stage of the AI lifecycle.

Lifecycle stage: Research and design

Examples of risk:
  • The solution’s inherent risks (such as a computer vision application that captures and potentially misuses customers’ or employees’ images or other personally identifiable information) are overlooked during the design phase.
  • The solution’s intended purpose isn’t aligned with the organization’s mission and values.

Tools for managing risk:
  • A framework for what defines the ethical use of AI and data in the organization.
  • A cross-functional panel with representation from the business units as well as the technology, privacy, legal, ethics, and risk groups.
  • A data or AI ethics code of conduct by which professionals must abide.

Lifecycle stage: Build and train

Examples of risk:
  • The organization lacks appropriate ways to secure consent from individuals whose data is used to train the AI model.
  • A programmer builds bias (either consciously or unconsciously) into a model intended for an AI-enabled solution, such as in the personnel recruiting example previously described.
  • The data used to train an AI model has quality issues.

Tools for managing risk:
  • A process for determining where and how to obtain the data that trains the models.
  • Guidelines on where and how user consent becomes a consideration in the training phase.
  • Policies for where and how to build models and whether to use open source technology.
  • An assessment of ways that an AI solution can teach itself behaviors out of sync with the organization’s mission or values.

Lifecycle stage: Change and operate

Examples of risk:
  • A chatbot (an AI application that can include cognitive language capabilities) learns new behaviors that are inappropriate or offensive to customers.
  • The organization is unable to quickly assess which new data sources an AI-enabled solution has recently accessed on its own.
  • The organization lacks the ability to test and monitor AI solutions.

Tools for managing risk:
  • A process for the organization to engage in continuous monitoring.
  • An assessment of ways that an AI-enabled solution can gain access to new forms of data.
  • A process for the business to update the board on AI-related risks or issues.
  • Organizational thresholds or tolerance levels to help determine whether to decommission an AI-enabled solution.
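One of the "change and operate" controls, continuous monitoring against organizational tolerance levels, can be sketched as a simple drift check. This hypothetical example compares a live metric (the denial rate for some monitored segment of loan decisions) against an agreed baseline and flags the solution for escalation or decommission review when the deviation exceeds a tolerance. The function name, metric, and threshold are all illustrative assumptions, not a prescribed method.

```python
def drift_alert(baseline_rate, recent_outcomes, tolerance=0.10):
    """Return True when the recent denial rate (outcomes coded 1 = denied,
    0 = approved) deviates from the baseline by more than the tolerance,
    triggering review under the organization's thresholds."""
    if not recent_outcomes:
        return False  # nothing to evaluate yet
    recent_rate = sum(recent_outcomes) / len(recent_outcomes)
    return abs(recent_rate - baseline_rate) > tolerance

# Baseline denial rate 20%; a recent batch denies 5 of 10 applications.
print(drift_alert(0.20, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))  # True -> escalate
```

In practice such a check would run on a schedule, log its inputs for audit, and feed the board-reporting process the table describes.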

First steps along the AI ethics journey

AI ethics is a sweeping endeavor with many moving parts. Technology aside, though, the initial approach should follow a path similar to that of other ethics and compliance programs.

It’s easy to get caught up in the complexity of AI. But starting with the basics can create near-term impact while offering maximum room to learn as you go. Over time, the organization can integrate the finer nuances of AI ethics as its implications—for the organization and stakeholders alike—become known.

Learn more: Visit the Notre Dame Deloitte Center for Ethical Leadership

The Notre Dame Deloitte Center for Ethical Leadership (NDDCEL) is a collaboration between the University of Notre Dame and Deloitte.

The University and Deloitte, having identified a shared value of personal integrity in today’s business world, came together through the NDDCEL to advance the understanding and implementation of ethical leadership practices in the corporate sphere.

Today, with the increasing use of AI-enabled solutions, ethics and leadership are entering uncharted territory. The Center is currently focused on developing a mutual understanding of ethical issues, challenges, opportunities, and solutions related to AI and analytics. As part of this initiative, the NDDCEL provides executives and business leaders with opportunities to learn more about this rapidly evolving topic, connect with like-minded peers, and help develop the next generation of business leaders by translating research insights into leading practices.
 
To learn more, please visit the NDDCEL’s website.

Addressing the AI ethics imperative—everyone’s responsibility

With its machine learning, deep learning, NLP, and computer vision capabilities, AI offers the exciting prospect of improving the human condition. But there is a potential dark side to AI that’s hard to ignore.
 
The result is a new frontier in business ethics. Those involved with the advancement of AI—including corporate boards, management teams, researchers, and engineers—face a growing imperative to bring an ethical lens to what they design and build. This approach should be articulated through organizational ethical constructs that apply throughout the AI product lifecycle. To be effective, each construct should reflect an understanding of AI-related vulnerabilities along the previously described four dimensions of risk.
 
It’s too soon to tell where this journey will lead businesses and their customers. But the way forward is in sight. It begins with a top-level commitment to ethical leadership and a focus on technical and ethical literacy. With ethical guardrails to limit missteps along the way, everyone in an organization can work together to produce solutions that deliver on AI’s potential.


Endnotes

1 https://www.merriam-webster.com/dictionary/ethic.

2 Op cit, “State of AI in the Enterprise, 2nd Edition.”

3 https://www.merriam-webster.com/dictionary/ethic.

Get in touch

Mary Galligan
Managing director
Deloitte Risk & Financial Advisory
Deloitte & Touche LLP
mgalligan@deloitte.com

Vivek Katyal
Principal
Deloitte Risk & Financial Advisory
Deloitte & Touche LLP
vkatyal@deloitte.com

Maureen Mohlenkamp
Principal
Deloitte Risk & Financial Advisory
Deloitte & Touche LLP
mmohlenkamp@deloitte.com

Courtney Parry
Senior manager
Deloitte Risk & Financial Advisory
Deloitte & Touche LLP
cparry@deloitte.com
 

Christopher Adkins
Executive director
Notre Dame Deloitte Center for Ethical Leadership
University of Notre Dame
cadkins1@nd.edu

 

