As leaders grapple with the cascade of decisions associated with artificial intelligence’s impact on their organizations, one of the challenges they face is fostering trust in their AI models and implementations. Without thoughtful design and implementation that ensures equitable access and value across the organization, AI’s perceived role could quickly shift from ally to adversary. Diversity, equity, and inclusion leaders could help ensure that equity remains a business priority amid the enterprisewide focus on other AI issues, including risk mitigation, governance, and compliance.
Deloitte’s DEI Institute conducted a targeted, cross-industry survey of 71 chief DEI officers (CDEIOs) or equivalent leaders in March 2024 to better understand how organizations are utilizing their DEI leadership to inform the development of AI strategies and models. While 78% of CDEIOs surveyed agree or strongly agree that their organization continues to uphold its commitment to DEI alongside investments in AI, the survey also reveals that some organizations are falling short when it comes to embracing the practices that allow DEI to inform AI strategy (figure 1). Where are these disconnects, and how can DEI leaders step in to influence how AI is created, developed, and managed with equity in mind?
While 97% of human resources leaders in a Harvard Business Review study say their organizations have made changes that are improving DEI outcomes,1 only 35% of CDEIOs in the Deloitte DEI Institute study agree or strongly agree that their board and C-suite leaders understand the need for DEI strategy to continue to evolve alongside AI. DEI leaders are in a unique position to align AI and DEI outcomes and help ensure that their organizations continue to prioritize equity-focused commitments. For example, consider a scenario in which a chief DEI officer is incorporated into the development process of an AI tool prior to its launch. Their vantage point, particularly their proximity to demographic data from racially and ethnically diverse populations, can empower them to identify data quality risks that others could overlook.
Incorporating a broad range of diverse perspectives into the AI life cycle—from ideation and development to deployment and assessment—is critical to minimizing biases and other potential pitfalls. For example, consider a customer service chatbot that’s programmed to converse in colloquial or conversational English. The user communicates by entering text, and the responses are delivered via a simulated voice. At a glance, this application appears to be a straightforward tool to facilitate customer engagement. However, what happens if the customer struggles with typing due to accessibility issues or has auditory challenges that make it difficult to understand the voice outputs? What if English is a second or less familiar language for the customer, making it challenging to engage with the tool? The inequities in value that can arise from AI deployment may not always be readily apparent. Recognizing these inequities requires input from stakeholders with diverse backgrounds and life experiences.
Organizations may encounter risks if they omit ethical safeguards and accountability mechanisms for AI. Consider the challenge of talent acquisition when sourcing for skills over experience. A data science team may create and deploy an AI-enabled candidate screening tool to support efficiency. The technologists who build and train the model may not have the background or insight to identify the ways in which such an application could create bias. This could, in turn, lead to biased decision-making and potentially unfair hiring practices. Chief DEI officers are uniquely positioned to advocate for increased transparency and impact assessments of AI systems, but according to our research, few organizations seem to be incorporating equity-focused accountability measures.
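To make the idea of an equity-focused impact assessment more concrete, the sketch below shows one simple check a review team might run on a screening tool’s outcomes: comparing selection rates across demographic groups and flagging large gaps. The data, group labels, and 0.8 threshold (a common rule of thumb sometimes called the four-fifths rule) are illustrative assumptions for this example, not findings from the survey or a prescribed methodology.

```python
# Hypothetical sketch of one equity check in an AI impact assessment:
# compare selection rates across demographic groups and flag groups whose
# rate falls well below the highest-rate group. All data here is illustrative.

from collections import Counter

def adverse_impact_ratios(candidates, threshold=0.8):
    """candidates: list of (group, selected) tuples.
    Returns each group's selection rate and whether it falls below
    `threshold` times the highest group's rate (a possible bias signal)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: (rate, rate / top_rate < threshold) for g, rate in rates.items()}

# Illustrative data only: (group, whether the hypothetical tool advanced the candidate)
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 25 + [("group_b", False)] * 75

for group, (rate, flagged) in adverse_impact_ratios(sample).items():
    note = " <- review for potential bias" if flagged else ""
    print(f"{group}: selection rate {rate:.0%}{note}")
```

A check like this is only a starting point; a fuller assessment would also examine the training data, feature choices, and downstream decisions the tool informs.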
Organizations that want to prioritize equity in AI will likely need to focus on the areas where disconnects between DEI strategies and AI practices are evident. While DEI leaders are uniquely positioned to help resolve these disconnects, collaboration across the C-suite will be important to building trust. C-suite leaders and CDEIOs might want to consider the following ways of collaborating to foster more equitable AI.
Ensure that CDEIOs have a seat at the strategy table. Comprehending and managing the subtleties of human bias, as well as mitigating equity-related risks, are competencies in which chief DEI officers often excel. If they’re invited to participate in AI strategy development, CDEIOs can bring an integrated, equity-centered perspective on the design and implementation of AI tools to their collaboration with other stakeholders such as chief technology officers, chief information officers, and chief talent officers.
Empower DEI leaders to help drive AI literacy. Only about one-third of survey respondents agree or strongly agree that their organizations offer learning opportunities focused on the intersection of AI and DEI. Yet 49% of CDEIOs participating in our study agree or strongly agree that they are actively encouraging leaders and workforce members to boost their AI literacy. CDEIOs can help curate the learning opportunities that guide their organizations in responsibly and ethically leveraging AI tools. Elevating AI literacy is a pivotal step toward responsible and ethical AI usage, all while maintaining a focus on equity.
Engage CDEIOs as allies to help establish trust at every level. Trust depends on an AI system aligning with human values and addressing risks. Each AI deployment is unique, with varying training data, model design, environments, and uses that can affect trust. Issues like bias, security, and transparency can impede responsible AI use. DEI leaders can push C-suite executives to prioritize trust. In collaboration with IT, HR, legal, finance, and ethics teams, chief DEI officers can help ensure that AI promotes equitable outcomes and aligns with organizational commitments.
As AI continues to evolve, focusing on equity will be crucial to realizing its benefits responsibly and ethically, and CDEIOs can play a key role in making sure AI tools and strategies are built with equity in mind.
Deloitte’s DEI Institute surveyed chief DEI officers or equivalent leaders in March 2024 and received 71 responses from DEI leaders across industries and sectors whose responsibilities span the United States and global markets. The survey aimed to gather insights on how these DEI executives perceive and use artificial intelligence, how they engage with and navigate the AI ecosystem, and their beliefs and perspectives on their organizations’ AI efforts.
Read more at www.deloitte.com/us/equitable-ai.