
Building trust in AI: How to overcome risk and operationalize AI governance

This report guides organizations on their AI adoption journey by outlining strategies to operationalize AI governance across the AI lifecycle. To be effective, organizations must strike a balance between accelerating the use of AI and putting the right AI governance in place to ensure trustworthiness.

There are five critical strategies:

Identifying risk

Organizations must identify potential AI system risks early in the AI lifecycle. This allows AI system owners and developers to make the right design, development, and deployment decisions to build trust, and it limits the amount of redevelopment work. When risks are assessed and mitigated only at the end of the development or validation (quality-assurance) stage, a common pitfall is costly rework, or even the loss of time and financial investment if the project cannot be amended and therefore cannot be implemented.

Scaling governance to the AI system

Addressing AI risks involves a complex ecosystem of stakeholders who provide guidance and mitigation strategies based on their functional expertise. Privacy SMEs, legal and compliance SMEs, security SMEs, senior data scientists, ethicists, and cross-functional groups of business leaders each play a role in addressing AI risks. An AI system owner should know which stakeholder groups need to be consulted in order to be successful and, by extension, which stakeholder groups may not be necessary given the attributes of an AI system. This demands a more nuanced approach than standard risk tiers, one facilitated by understanding the key parameters and attributes of an AI system.
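
To make this concrete, the sketch below shows one way attribute-based scaling could be expressed in code; the system attributes, SME groups, and routing rules are illustrative assumptions rather than a prescribed taxonomy.

```python
# Illustrative sketch: routing an AI system to SME reviews based on its
# attributes rather than a single risk tier. Attribute names, SME groups,
# and rules are hypothetical examples, not a prescribed taxonomy.
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    name: str
    uses_personal_data: bool
    automates_decisions: bool      # decisions made without human review
    customer_facing: bool
    uses_third_party_model: bool


def required_sme_reviews(profile: AISystemProfile) -> set[str]:
    """Return the SME groups an AI system owner should engage."""
    reviews = {"AI governance lead"}  # assumed baseline for every system
    if profile.uses_personal_data:
        reviews.add("Privacy")
    if profile.automates_decisions or profile.customer_facing:
        reviews.update({"Legal & compliance", "Ethics"})
    if profile.uses_third_party_model:
        reviews.add("Security")
    return reviews


if __name__ == "__main__":
    chatbot = AISystemProfile(
        name="Customer support chatbot",
        uses_personal_data=True,
        automates_decisions=False,
        customer_facing=True,
        uses_third_party_model=True,
    )
    print(sorted(required_sme_reviews(chatbot)))
    # ['AI governance lead', 'Ethics', 'Legal & compliance', 'Privacy', 'Security']
```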

Implementing self-assessments

Organizations must rely heavily on AI system owners and developers to identify risk and scale governance. To extract the right amount of information, Deloitte has developed a Trusted AI self-assessment. Among its features, the tool collects key parameters of the AI system to gauge which AI risks require further attention and analysis, and then:

  • Delivers actionable guidance to AI system owners and developers to inform their design and development decisions
  • Triages AI system owners toward the SMEs (groups/functions) they must engage with to manage these risks

The tool also supports AI system inventory and aggregate reporting, which is key to an organization's understanding of its AI adoption, ROI, aggregate risk profile, and other relevant insights.
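
As a rough illustration of what such inventory and aggregate reporting could build on, the sketch below rolls up hypothetical self-assessment records into a portfolio view; the record fields and risk flags are assumptions, not the actual structure of Deloitte's tool.

```python
# Illustrative sketch: aggregating self-assessment records into a portfolio
# view. The record fields and risk flags are hypothetical examples of the
# kind of data a self-assessment might collect for each AI system.
from collections import Counter

inventory = [
    {"system": "Churn model", "status": "deployed", "risk_flags": ["fairness"]},
    {"system": "Support chatbot", "status": "in development",
     "risk_flags": ["privacy", "explainability"]},
    {"system": "Demand forecast", "status": "deployed", "risk_flags": []},
]

# Aggregate reporting: adoption by lifecycle stage and open risk areas.
by_status = Counter(record["status"] for record in inventory)
by_risk = Counter(flag for record in inventory for flag in record["risk_flags"])

print("Systems by stage:", dict(by_status))
print("Open risk areas:", dict(by_risk))
# Systems by stage: {'deployed': 2, 'in development': 1}
# Open risk areas: {'fairness': 1, 'privacy': 1, 'explainability': 1}
```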

Adopting technical playbooks

Technical playbooks provide developers building AI systems with tactical, situation-specific guidance. They focus on the types of AI systems most common to the organization. Playbooks demonstrate how certain techniques are applied and contain references to open-source or procured tools and resources. They also contain techniques that have been tested and are expected to have an extended shelf life, though they still need to be revisited periodically to update the techniques, examples, and references.

Fairness and explainability are risks well-suited to technical playbooks because their mitigation is applied at the individual AI system level and often requires technical (programmed) methods.
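
As one example of such a programmed method, the sketch below computes a demographic parity (selection-rate) gap on synthetic predictions, the kind of check a fairness playbook might codify; open-source toolkits such as Fairlearn provide equivalent, more complete metrics.

```python
# Illustrative sketch of a programmed fairness check a playbook might codify:
# demographic parity, i.e. the gap in positive-prediction (selection) rates
# across groups. The data here is synthetic; in practice the predictions and
# group labels come from the AI system under review.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 0, 1, 0, 0, 1, 0],   # 1 = favorable outcome predicted
})

selection_rates = results.groupby("group")["prediction"].mean()
parity_gap = selection_rates.max() - selection_rates.min()

print(selection_rates.to_dict())                    # ≈ {'A': 0.67, 'B': 0.25}
print(f"Demographic parity difference: {parity_gap:.2f}")

# A playbook would typically pair this metric with a tolerance, e.g. flag the
# system for SME review if the gap exceeds an agreed threshold.
```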

Using trusted AI tools

After AI system owners and developers have used a self-assessment to identify inherent AI risks and make informed design decisions, and after they have consulted technical playbooks to understand at a granular level what actions to take, they will require software tools to improve their AI systems. These tools can be open-source or acquired solutions and are designed to address risk areas like fairness, explainability, and robustness. As organizations weigh the costs and benefits of building vs. buying solutions, they will require a comprehensive understanding of the software landscape, including real costs, customization, and the level of proficiency required to effectively leverage the tools.
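
As a small illustration from the open-source end of that landscape, the sketch below uses scikit-learn's permutation importance as a lightweight explainability check; the public dataset and model are placeholders standing in for an organization's own AI system.

```python
# Illustrative sketch of using an open-source tool for explainability:
# scikit-learn's permutation importance scores how much each feature
# contributes to model performance. The dataset and model are placeholders;
# other open-source or commercial tools cover similar ground.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top_features = result.importances_mean.argsort()[::-1][:5]
for idx in top_features:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```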
