
Trusted AI, trusted government

Developing and deploying trustworthy AI in government

Artificial intelligence (AI) offers government tremendous opportunities to improve operations and the lives of citizens, yet ensuring that such a powerful tool stays true to our collective values requires significant and continuous effort.

Government and AI

Artificial intelligence and machine learning are powerful tools with the potential to revolutionize how government delivers services to citizens. AI-enabled traffic management, for example, could help cut congestion by 25%. But like any powerful tool, AI comes with risks. To help address and mitigate those risks, organizations need to develop controls and mechanisms to manage AI technology so that it advances society while functioning in a trustworthy, equitable, and ethical manner. AI algorithms do not have ethics in themselves; people have ethics. If government organizations want people to trust that the outcomes of their AI tools are ethical, they need to build ethical principles into the development of the tools themselves.


To build trust in AI, make Trustworthy AI™

How can we further safeguard AI so we can realize its benefits? The starting point is to shift the discussion from why people mistrust AI when it fails to why they can trust AI when it works. In short, AI needs to earn people's trust.

Trust is fragile, earned over time and lost in an instant. Because of this, a consistent framework for addressing ethical issues can help ensure an organization doesn't overlook any of the key factors that underpin trust in AI. Deloitte's Trustworthy AI framework lays out six key dimensions for building trust in AI. The framework is designed to help agencies identify and mitigate potential risks related to AI ethics at every stage of the AI development life cycle.

The dimensions of Trustworthy AI are:

  • Fair and impartial: AI must be designed and trained to follow a fair, consistent process and to make decisions free of bias (see the fairness-check sketch after this list).
  • Transparent and explainable: Agencies should emphasize creating algorithms that are transparent and can be explained to the people affected by them.
  • Responsible and accountable: AI systems need policies about who is responsible and accountable for their output or decision making.
  • Safe and secure: For AI to be trustworthy, it must be protected from cybersecurity risks that could manipulate the models and result in digital or physical harm.
  • Privacy: Privacy is critical for AI because the sophisticated insights AI systems generate often stem from data that is more detailed and personal than the data traditional systems rely on.
  • Robust and reliable: AI needs to be at least as robust and reliable as the traditional systems, processes, or people it is augmenting or replacing.
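
What a fairness check looks like in practice will vary by agency and model, but the "fair and impartial" dimension can be made concrete. Below is a minimal sketch, assuming a hypothetical system whose favorable and unfavorable decisions are logged per demographic group; the group names, outcomes, and 0.1 tolerance are illustrative assumptions, not drawn from any real program. It computes a demographic parity gap: the largest difference in favorable-decision rates between any two groups.

```python
def approval_rate(decisions: list[bool]) -> float:
    """Fraction of cases that received a favorable decision."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group: dict[str, list[bool]]) -> float:
    """Largest difference in favorable-decision rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical logged decisions for two demographic groups.
outcomes = {
    "group_a": [True, True, False, True, True, False, True, True],
    "group_b": [True, False, False, True, False, False, True, False],
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # the tolerance is an illustrative policy choice, not a standard
    print("Gap exceeds tolerance; flag the model for fairness review.")
```

A check like this is a starting point, not a verdict: the appropriate fairness metric and tolerance are policy decisions that each agency must make and document for its own context.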

Putting principles into practice: Operationalize and institutionalize ethical AI

Formulating principles or guidelines is helpful, but not sufficient. To bring real value to citizens, AI needs to operate at scale and remain trustworthy without relying on manual reviews or the vigilance of a few committed individuals. Government leaders should consider starting now to prepare their organizations to build and encourage trustworthy AI: institutionalize governance structures, embed ethical AI into organizational culture, and monitor the performance of AI systems using new tools and resources.
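
Monitoring can likewise be made concrete. The sketch below, again using illustrative data, applies one widely used drift measure, the Population Stability Index (PSI), to compare a deployed model's current score distribution against its validation baseline; a rising PSI is a signal to schedule human review. The bucket count and the 0.2 alert threshold are common rules of thumb, not fixed standards.

```python
import math

def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population Stability Index between two score samples in [0, 1]."""
    def proportions(scores: list[float]) -> list[float]:
        counts = [0] * buckets
        for s in scores:
            counts[min(int(s * buckets), buckets - 1)] += 1
        # A small floor avoids log(0) on empty buckets.
        return [max(c / len(scores), 1e-6) for c in counts]
    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Illustrative scores: validation baseline vs. current deployment (both hypothetical).
baseline_scores = [0.15, 0.3, 0.35, 0.5, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9]
current_scores = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5]

drift = psi(baseline_scores, current_scores)
print(f"PSI: {drift:.3f}")
if drift > 0.2:  # a common rule-of-thumb threshold for significant shift
    print("Score distribution has drifted; schedule a model review.")
```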

Building trustworthy AI is a complex yet worthwhile task. Agencies that invest in building trust in AI can maximize the benefits of AI to achieve mission outcomes, improve human experience, and deliver efficient services.
