
Perspectives

Trustworthy AI™ for regulatory and legal support

What do you need to know about how your AI makes decisions?

The growing reliance on artificial intelligence (AI) is raising new risks—ones that may not be readily visible or that play out in unexpected ways. To manage these risks, organizations need to understand how AI models are created and change over time.

AI is a game changer, with some downsides

It wasn’t so long ago that human judgment was the main processor of business decisions. In went data, observations, and lessons from the past. Out came decisions such as whom to lend money to, what advertisements to run, and where the optimal shipping route lay.

Today, AI is making more of these decisions. But for all its benefits, AI also bears latent risk. Sometimes, risk is introduced unintentionally, such as by selecting the wrong data set for training or omitting key information from an algorithm. Other times, AI can be subject to sabotage, deliberate bias, or intentional gaps in performance.

Bringing AI risks out of hiding

AI risk can be challenging to analyze because the technology is evolving and its models are inherently complex. If incorrect or incomplete data goes undetected, an AI application can repeat or even amplify mistakes as it codifies and scales its logic. The organization may not discover the problem until something happens—say, a whistleblower comes forward, a business partner litigates, or a regulator comes knocking.

But when it comes to risk, even complex and intelligent machines can be demystified by asking a few simple questions. What does the software development and model development life cycle look like? What decisions are the developers making to encode predictive features? From examining data models to establishing context via in-person interviews and contemporaneous documentation, independent model assessment teams can shine a light on what’s happening and work with you to set up governance processes for AI-related risks.


Open a window into AI decision-making

Take advantage of our experience with Trustworthy AI capabilities to find fair, transparent AI solutions.

Data and AI risk
Assess the conceptual soundness of a model’s architectural components and internal mechanisms by identifying potential design flaws

Bias identification
Put next-generation tools to work identifying potential biases latent in the model training process
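As a simple illustration of one common bias-identification check (not a description of any specific Deloitte tool), the "four-fifths rule" disparate impact ratio compares selection rates across groups defined by a protected attribute. The data and group labels below are hypothetical:

```python
# Hypothetical sketch: disparate impact ratio (the "four-fifths rule")
# for a binary decision (e.g., loan approval) across two groups.

def selection_rate(outcomes):
    """Fraction of positive (approved) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag under the four-fifths rule."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical model decisions: 1 = approved, 0 = denied.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approval rate
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approval rate

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
print("Potential bias flag:", ratio < 0.8)
```

A ratio this far below 0.8 would typically prompt deeper review of the training data and model features, not an automatic conclusion of unlawful bias.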

Bias remediation
Remediate observed biases using techniques that align predicted outcomes with regulatory and internal policies
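One of the simplest remediation techniques is post-processing: adjusting per-group decision thresholds so selection rates come into alignment with policy targets. The sketch below is illustrative only; the scores and cutoffs are made up, and real engagements weigh such adjustments against accuracy and legal constraints:

```python
# Hypothetical sketch: per-group score thresholds chosen so both groups'
# selection rates move closer together (a simple post-processing remediation).

def apply_threshold(scores, threshold):
    """Convert model scores to binary decisions at a given cutoff."""
    return [1 if s >= threshold else 0 for s in scores]

scores_a = [0.9, 0.8, 0.7, 0.6, 0.55]   # hypothetical model scores, group A
scores_b = [0.6, 0.5, 0.45, 0.4, 0.3]   # hypothetical model scores, group B

# A single global cutoff of 0.5 approves 100% of group A but only 40% of
# group B. Lowering group B's cutoff to 0.4 raises its rate to 80%.
decisions_a = apply_threshold(scores_a, 0.5)
decisions_b = apply_threshold(scores_b, 0.4)
print(sum(decisions_a) / len(decisions_a))  # 1.0
print(sum(decisions_b) / len(decisions_b))  # 0.8
```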

Transparency in AI
Understand logic that describes the algorithm, enabling comparison with established policies and detection of potential tampering

Reduce your exposure to risk

A robust AI oversight and governance system can provide insights for critical decision-making. It can also help to significantly reduce your exposure to risk while limiting legal, financial, and reputational consequences.

Risk prevention
Identify areas where algorithms are misaligned with the organization’s ethics and values, negatively affect financial and strategic decision-making, or introduce unlawful bias and discrimination.

Added visibility
Monitor risks to gain transparency and insight into AI decisions while relying less on limited testing. Identify potential causes of algorithmic bias, allowing ongoing AI model enhancement.

Operational confidence
Uncover algorithmic errors that can cause quality issues and failures, potentially avoiding significant operational disruptions.

Litigation support
Deloitte has a demonstrated history of serving as an expert witness in litigation, helping clients achieve favorable outcomes.


