How teaming artificial intelligence with humans can help de-bias decision-making
Neither algorithmic nor human decision-making is perfect, but when paired, they can be less biased
By Tasha Austin, Joe Mariani, Devon Dickau, Pankaj Kamleshkumar Kishnani, and Thirumalai Kannan
Artificial intelligence (AI) has incredible power to find patterns in large amounts of data, surfacing conclusions that human decision-makers may not reach on their own. Tapping into that power, governments have used AI to help allocate grants, prioritize health and fire inspections, detect fraud, prevent crime, and personalize services. However, AI may carry programmed biases that systematically produce outcomes unfair to a person or group. The challenges of biased AI algorithms are becoming well known: from potentially flawed facial recognition to potentially biased bail decisions, an overreliance on AI may create significant challenges for government organizations.
But hidden in those potential biases may be a path toward an even more equitable government, one where all people have the opportunity to thrive. The limitations of AI and human decision-making are the inverse of each other: where humans struggle with large volumes of data, precision, and consistency, AI can excel. Conversely, AI may struggle to adapt to context or to understand human values, things many humans do naturally, almost without thought.
Pairing human knowledge and experience with AI capability may give governments the ability to tackle complex decisions with greater confidence in the accuracy, and the equity, of their conclusions. AI may augment human capabilities by analyzing voluminous datasets and identifying unconscious inconsistencies or potential biases in human judgments.
AI and human judgment may be perfect partners. Human judgment may be wise and sensitive to context, but it has limitations; AI is powerful but will go only where its programming directs it. What one lacks, the other may provide.
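To make that pairing concrete, one way an algorithm can surface inconsistencies in human judgments is to fit a simple model to past decisions and flag cases where the recorded decision disagrees sharply with what the case's own features would predict. The Python sketch below is illustrative only; the data, features, and thresholds are all hypothetical.

```python
# Minimal sketch: using a model as a consistency check on past human
# decisions. All data, features, and thresholds here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical case features (e.g., scored application criteria) and
# the human reviewers' recorded decisions on 200 past cases.
X = rng.normal(size=(200, 3))
signal = X @ np.array([1.5, -1.0, 0.5])
decisions = (signal + rng.normal(scale=0.8, size=200) > 0).astype(int)

# Fit a simple model to the human decisions, then flag cases where the
# recorded decision disagrees sharply with the model's estimate.
model = LogisticRegression().fit(X, decisions)
p_approve = model.predict_proba(X)[:, 1]
flagged = np.where(
    ((p_approve > 0.9) & (decisions == 0))
    | ((p_approve < 0.1) & (decisions == 1))
)[0]

print(f"{len(flagged)} past decisions flagged for a second human look")
```

Cases flagged this way are not verdicts of bias; they are candidates for a second human look, which is exactly the division of labor the pairing is meant to create.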
Agencies can take six steps to help confirm the responsible development of AI and limit the deployment of algorithms that may produce individual and societal biases.
- Review underlying training data (a minimal sketch of this step follows the list).
- Adopt data trails through data standards.
- Build models with intention.
- Deploy “red teams” and community jury practices to detect potential bias in AI.
- Develop an independent governance structure.
- Operationalize ethical AI guidelines and principles.
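As an illustration of the first step, reviewing underlying training data often begins by comparing outcome rates across demographic groups in the historical data a model would learn from. The Python sketch below applies the well-known four-fifths rule of thumb; the dataset and column names are hypothetical.

```python
# Minimal sketch: screening hypothetical training data for disparate
# outcome rates across groups before a model learns from it.
import pandas as pd

# Hypothetical historical decisions: a protected attribute ("group")
# and the recorded outcome ("approved").
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval (selection) rate per group.
rates = df.groupby("group")["approved"].mean()

# Four-fifths rule of thumb: flag any group whose selection rate falls
# below 80% of the highest group's rate.
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]

print(rates)
if not flagged.empty:
    print(f"Potential disparate impact for groups: {list(flagged.index)}")
```

A disparity flagged this way does not by itself prove the data is biased, but it tells reviewers where to look before a model inherits the pattern.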
AI has the potential to make government services more equitable, but agencies should confirm that the biases of the analog era are not encoded into AI. Diversity of data, talent, and governance may go a long way toward confirming that AI models augment, rather than replace, human judgment and help create a more inclusive future.
Get in touch
- Tasha Austin, Advisory Principal, Deloitte & Touche LLP, laustin@deloitte.com
- Joe Mariani, Senior Research Manager, Deloitte Services LP, jmariani@deloitte.com
- Devon Dickau, Senior Manager and DEI Consulting Services Leader, ddickau@deloitte.com
- Pankaj Kamleshkumar Kishnani, Researcher, Deloitte Services LP, pkamleshkumarkish@deloitte.com
- Thirumalai Kannan, Researcher, Deloitte Services LP, tkannand@deloitte.com