Teaming AI with Humans

Perspectives

How teaming artificial intelligence with humans can help de-bias decision-making

Neither algorithmic nor human decision-making is perfect, but when they are paired they can be less biased

By Tasha Austin, Joe Mariani, Devon Dickau, Pankaj Kamleshkumar Kishnani, and Thirumalai Kannan

Artificial intelligence (AI) has incredible power to find patterns in large amounts of data, surfacing conclusions that human decision-makers may not be able to reach on their own. Tapping into that power, governments have used AI to help allocate grants, prioritize health and fire inspections, detect fraud, prevent crime, and personalize services. However, AI may carry programmed biases that systematically produce outcomes unfair to a person or group. The challenges of biased AI algorithms are becoming well known: from potentially flawed facial recognition to potentially biased bail decisions, overreliance on AI may create significant challenges for government organizations.

But hidden in those potential biases may be a path forward to an even more equitable government where all people have the opportunity to thrive. The limitations of AI and human decision-making are the inverse of each other. Where humans struggle with large volumes of data, precision, and consistency, AI can excel. Conversely, AI may struggle to adapt to context or to understand human values, things many humans do naturally, almost without thought.

Pairing human knowledge and experience with AI capability may allow governments to tackle complex decisions with greater confidence in the accuracy and equity of their conclusions. AI may augment human capabilities by analyzing voluminous datasets and flagging unconscious inconsistencies or potential biases in human judgments.
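To make that idea concrete, here is a minimal sketch of how a simple analysis can flag inconsistencies in historical human judgments for review. It assumes a hypothetical table of past decisions with columns named "group" and "approved"; the column names, toy data, and the 0.8 rule of thumb are illustrative assumptions, not a prescribed method.

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate.

    Values well below 1.0 (a common rule of thumb is 0.8) flag groups
    whose outcomes diverge enough to warrant human review.
    """
    rates = decisions.groupby("group")["approved"].mean()
    return rates / rates.max()

# Toy example: reviewers approved applicants in group "A" far more often.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(disparate_impact(decisions))
# group A -> 1.0; group B -> ~0.33, flagged for review under the 0.8 rule of thumb
```

A ratio like this is only a screening signal: it tells reviewers where to look, while the judgment about whether a disparity is justified remains with people.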

AI and human judgment may be perfect partners. Human judgment can be wise and sensitive to context, but it has limits. AI is powerful but goes only where its programming directs it. What one lacks, the other may provide.

Agencies can take six steps to help confirm the responsible development of AI and limit the deployment of algorithms that may produce individual and societal biases.

  1. Review underlying training data (see the sketch after this list).
  2. Adopt data trails through data standards.
  3. Build models with intention.
  4. Deploy “red teams” and community jury practices to detect potential bias in AI.
  5. Develop an independent governance structure.
  6. Operationalize ethical AI guidelines and principles.
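As a sketch of step 1, the snippet below compares each group's share of a training set against a reference population share and reports gaps beyond a tolerance. The group labels, shares, and 5% tolerance are hypothetical assumptions used only for illustration; a real review would apply the agency's own data and standards.

```python
from collections import Counter

def representation_gaps(train_labels, population_shares, tolerance=0.05):
    """Return groups whose training-set share deviates from the
    reference population share by more than `tolerance`."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Toy example: group "B" is 30% of the population but only 10% of the data.
train_labels = ["A"] * 9 + ["B"] * 1
print(representation_gaps(train_labels, {"A": 0.7, "B": 0.3}))
# {'A': (0.9, 0.7), 'B': (0.1, 0.3)}
```

A gap like this does not by itself prove a model will be biased, but it tells teams which groups the model will have comparatively little evidence about before the model is ever trained.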

AI has the potential to make government services more equitable. However, agencies should confirm that the biases of the analog era are not encoded in AI. Diversity of data, talent, and governance can go a long way toward helping AI models augment, rather than replace, human judgment and create a more inclusive future.


Get in touch

Tasha Austin
Advisory Principal
Deloitte & Touche LLP
laustin@deloitte.com
Joe Mariani
Senior Research Manager
Deloitte Services LP
jmariani@deloitte.com
Devon Dickau
Senior Manager
DEI Consulting Services Leader
ddickau@deloitte.com
Pankaj Kamleshkumar Kishnani
Researcher
Deloitte Services LP
pkamleshkumarkish@deloitte.com
Thirumalai Kannan
Researcher
Deloitte Services LP
tkannand@deloitte.com
