Organizations Will Use AI to Reduce Bias | Deloitte US
Bias is everywhere. Nearly two-thirds of respondents in Deloitte’s 2019 report on the state of inclusion reported experiencing bias in the workplace in the past year. And the sobering statistics continue from there: respondents reported that bias had negative impacts on their productivity (68 percent), engagement (70 percent), and happiness, confidence, and wellbeing (84 percent).1
As humans, we hold a variety of unconscious biases. Many are useful, almost intuitive shortcuts for daily life; others are unproductive holdovers from the past that no longer serve us. We tend to favor people most like ourselves (similarity bias). We often prefer information that confirms our beliefs and discount information that contradicts them (confirmation bias). We can also put greater weight on things that have just happened (recency bias). These and other biases can unconsciously shape our decision-making: we may inadvertently hire or promote those most like us, make talent selections that align with our preconceived notions, and base performance evaluations on what we expect to see or have seen most recently.
Organizations are increasingly recognizing that humans are biologically hardwired to operate on instinct and habit and are seeking nonhuman solutions to mitigate outmoded and problematic biases. For instance, the use of artificial intelligence (AI) in recruitment alone is expected to increase threefold over the next two years.2
AI is not new, but it is making increasingly interesting strides into talent acquisition, internal mobility, learning and development, and performance management.
However, AI is not without its own challenges. The algorithms that drive AI (including the parameters for machine learning applications) are created by humans, and humans have unconscious biases. Until we reach the technological singularity, at which point AI will program itself (we’ll save that prediction for a future year), AI remains subject to the biases of its creators and its training data.
For example, if your company is currently made up mostly of Caucasian males over 40 years of age, and a talent acquisition AI tool learns correlations from that data set alone in order to find more high performers, it should be no surprise that the result will be more Caucasian males over 40. Clearly, a more thoughtful approach to “programming” the AI is required to identify and bring in a more diverse talent pool.
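The dynamic described above can be sketched in a few lines. This is a toy illustration, not any vendor’s actual tool: a naive ranking “model” that learns only from a skewed history of past hires will rank candidates from the historically dominant group first. The group labels and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical historical data: 90 of 100 past hires come from one group.
past_hires = ["group_a"] * 90 + ["group_b"] * 10

def recommend(candidates, history):
    """Rank candidates by how often their group appears among past hires.

    A model trained only on skewed history simply reproduces its base rates.
    """
    rates = Counter(history)
    total = len(history)
    return sorted(candidates, key=lambda c: rates[c] / total, reverse=True)

ranking = recommend(["group_b", "group_a"], past_hires)
print(ranking)  # ['group_a', 'group_b'] -- ranked first purely due to historical skew
```

Real recruiting models are far more complex, but the failure mode is the same: correlations learned from an unrepresentative data set become predictions that perpetuate it.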
Many organizations are aware of AI’s flaws and are taking steps to address them. For example, several leading technology companies have announced their use of open-source software tools that can be used to examine bias and fairness in AI models.3 Furthermore, there is a growing number of AI auditing firms emerging to help address these issues.
AI can provide humans with powerful tools to reduce unconscious bias, but in turn, humans need to design AI with fairness standards in mind and routinely monitor and test algorithms to ensure they do not favor or disadvantage any particular group. In this way we can use human judgment, aided by AI, to reduce both our unconscious biases and inadvertent machine-learning biases.
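One routine test of the kind described above is a demographic parity check, which compares selection rates across groups. The sketch below uses hypothetical model decisions and group labels; production fairness audits use richer metrics, but the core computation looks like this:

```python
def selection_rate(decisions, groups, group):
    # Fraction of candidates in `group` who received a positive decision.
    member_decisions = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_decisions) / len(member_decisions)

def demographic_parity_gap(decisions, groups):
    # Spread between the highest and lowest selection rates across groups.
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs: group "a" is selected at 0.75, group "b" at 0.25.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"parity gap: {demographic_parity_gap(decisions, groups):.2f}")  # parity gap: 0.50
```

A gap near zero suggests the model treats groups similarly on this metric; a large gap is a signal to investigate, not automatic proof of unfairness.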
Of course, even when work is augmented by AI, many decisions will still fall to humans—who are prone to cognitive shortcuts. But we can take this another step forward: Behavioral science can help create environments and offer choices that encourage better decision-making.
For example, a hiring manager or recruiter may show similarity bias in reviewing a resume. A resume-masking AI tool could be used to anonymize demographic details in order to reduce bias and nudge the resume reviewer to focus on the most critical job-related aspects. The intent is not to rely on biased shortcuts or “trick” people into one decision or another but rather to nudge them to consider the most pertinent factors.
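A resume-masking step like the one described can be sketched with simple pattern matching. The field names and resume format below are assumptions for illustration only; real masking tools rely on far more robust entity recognition.

```python
import re

# Illustrative fields that can trigger similarity bias in a reviewer.
MASK_FIELDS = ("Name", "Gender", "Date of Birth", "Nationality")

def mask_resume(text):
    """Redact the value of each demographic field, leaving other lines intact."""
    masked = text
    for field in MASK_FIELDS:
        # Replace everything after "Field:" on that line with [REDACTED].
        masked = re.sub(rf"(?m)^({field}:).*$", r"\1 [REDACTED]", masked)
    return masked

resume = "Name: Jane Doe\nGender: F\nSkills: Python, SQL\nExperience: 5 years"
print(mask_resume(resume))
```

The job-related lines (skills, experience) pass through untouched, nudging the reviewer toward the factors that actually predict performance.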
The combination of AI and behavioral science will be on the rise in 2020. More AI tools will continue to emerge, and organizations will become more familiar with behavioral science techniques and nudges that help their people make better-informed talent decisions.
Zachary Toof is a research manager, People Analytics, at Bersin™, Deloitte Consulting LLP.
Nehal Nangia is a research manager, Talent and Workforce Performance, at Bersin™, Deloitte Consulting LLP.
Janet Clarey is lead advisor, Technology, Analytics & Diversity & Inclusion, at Bersin™, Deloitte Consulting LLP.
1. The bias barrier: Allyships, inclusion, and everyday behaviors, Deloitte Development LLP, 2019, https://www2.deloitte.com/content/dam/Deloitte/us/Documents/about-deloitte/us-inclusion-survey-research-the-bias-barrier.pdf.
2. The 2019 State of Artificial Intelligence in Talent Acquisition, HR Research Institute, 2019, https://www.oracle.com/a/ocom/docs/artificial-intelligence-in-talent-acquisition.pdf?elqTrackId=1279a8827f3d4548ae3f966beeeef458&elqaid=83148&elqat=2.
3. Paul Teich, “Artificial Intelligence Can Reinforce Bias, Cloud Giants Announce Tools For AI Fairness,” Forbes, September 24, 2018, https://www.forbes.com/sites/paulteich/2018/09/24/artificial-intelligence-can-reinforce-bias-cloud-giants-announce-tools-for-ai-fairness/#332c72fd9d21.