How to address the inherent bias in algorithmic decision-making
Discover algorithm auditing
Algorithmic decision-making is amazing, but there is a catch. People create algorithms, and their biases can inadvertently influence the outcomes.
May 15, 2019
A blog post by James Guszcza, US chief data scientist, Deloitte Consulting LLP.
Based on the original article in Harvard Business Review.
Algorithmic decision-making is transforming the analytics landscape, and it is easy to see why. The potential advantages of using algorithms over an entire team of analysts are phenomenal:
- Speed—Algorithms run through data and deliver results in seconds, whereas analysts can take months
- Reliability—Algorithms consistently process every data point, whereas analysts are prone to human error
- Cost—Once set up, algorithms cost next to nothing to run, whereas analysts cost thousands of dollars
Algorithmic decision-making is amazing, but there is a catch. People create algorithms, and people have biases that can unduly influence the outcome.
A small detail that seems insignificant to the analyst writing an algorithm can have a vast effect on the results it generates. That effect then cascades and compounds as businesses and people make real-world decisions based on those results.
Consider this hypothetical scenario: An analyst building a model for mortgage loan approvals did not include the time of year as a factor, but a deeper dive into the data shows that mortgages originated in November are more likely to default. Based on the analyst’s algorithm, a bank approves thousands of mortgage applications submitted in November, many of which could default, with a potentially devastating impact on the bank.
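The hypothetical scenario above can be illustrated with a minimal sketch. The data, month labels, and function name here are all illustrative assumptions, not a real model; the point is simply that a seasonal effect an approval model ignores is easy to surface once someone thinks to look for it:

```python
# A minimal, hypothetical sketch of the mortgage example: check whether
# default rates vary by application month. All data is synthetic.
from collections import defaultdict

# (application_month, defaulted) pairs for past loans -- synthetic example data
loans = [
    ("Nov", True), ("Nov", True), ("Nov", False), ("Nov", True),
    ("Jun", False), ("Jun", False), ("Jun", True), ("Jun", False),
]

def default_rate_by_month(loans):
    """Return the observed default rate for each application month."""
    totals = defaultdict(int)
    defaults = defaultdict(int)
    for month, defaulted in loans:
        totals[month] += 1
        if defaulted:
            defaults[month] += 1
    return {m: defaults[m] / totals[m] for m in totals}

rates = default_rate_by_month(loans)
# A large gap between months suggests the model omits a relevant feature.
print(rates)
```

A gap like November at 75 percent versus June at 25 percent in the synthetic data above is exactly the kind of signal an auditor, rather than the original analyst, might be the one to catch.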
The biases that creep into algorithm development are usually subtler than in this example, but their effects compound in the same way once businesses and people act on the algorithm’s results. There must be a way to address the inherent values and biases that shape algorithm development.
Can algorithms be audited?
In the article, “Why We Need to Audit Algorithms,” the authors suggest that independent auditors may be able to translate their skills at finding bias in financial records into recognizing bias in algorithms.1
Independent auditors are laser-focused on searching for bias. Publicly traded companies in the United States hire auditors to examine their financial reports before filing them with the Securities and Exchange Commission. The auditors search every nook and dark corner of a company’s financial records for hidden data, management bias, and overvalued assets. In fact, finding bias is central to the audit itself.
Here’s what “algorithm auditing” as a new service line might entail:
First, it should adopt a holistic perspective. Computer science and machine learning methods will be necessary, but likely not sufficient foundations for an algorithm auditing discipline. Strategic thinking, contextually informed professional judgment, communication, and the scientific method are also required.
As a result, algorithm auditing must be interdisciplinary to succeed. It should integrate professional skepticism with social science methodology and concepts from such fields as psychology, behavioral economics, human-centered design, and ethics. A social scientist asks not only, “How do I optimally model and use the patterns in this data?” but further asks, “Is this sample of data suitably representative of the underlying reality?” An ethicist might go further to ask a question such as: “Is the distribution based on today’s reality the appropriate one to use?”2
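The social scientist’s question about representativeness can itself be made concrete. A minimal sketch, using invented categories and numbers purely for illustration, compares the group mix in a training sample against known population shares:

```python
# Hypothetical representativeness check: how far does each group's share
# of the sample deviate from its share of the reference population?
# Categories and figures below are illustrative assumptions, not real data.

def proportion_gaps(sample_counts, population_shares):
    """Return, per category, sample share minus reference population share."""
    total = sum(sample_counts.values())
    return {
        cat: sample_counts.get(cat, 0) / total - share
        for cat, share in population_shares.items()
    }

# Illustrative: a loan-application sample vs. census-style shares.
sample = {"urban": 800, "suburban": 150, "rural": 50}
population = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}

gaps = proportion_gaps(sample, population)
# Large positive or negative gaps flag groups the sample over- or
# under-represents relative to the population the model will serve.
for cat, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{cat}: {gap:+.2f}")
```

In this invented example, urban applicants are heavily over-represented while rural applicants are under-represented, which is exactly the kind of sampling bias an algorithm audit would surface before the model’s patterns are trusted.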
In the end, algorithmic decision-making is here to stay, so it is essential to find a way to address its shortcomings before they have serious implications for people, companies, and society as a whole.