
Trustworthy AI

Artificial intelligence (AI) has become one of the key technologies of this century and plays an increasingly essential role in addressing the challenges we face. AI will impact our daily lives across all sectors of the economy. However, to achieve the promise of AI, we must be ready to trust its results. We need AI models that satisfy a number of criteria and thus earn our trust.

Artificial Intelligence (AI) has fascinated computer scientists and the public alike since the term was coined in the 1950s. Over time, sensationalist scaremongering about runaway AIs has gradually given way to a grounded, realistic view: AI is a sophisticated technology – or set of technologies – with the potential to deliver significant economic, scientific and societal advantages. It is an immensely powerful tool with wide-ranging applications. Over the next 10 years, experts expect an incremental economic impact of AI worldwide of between $12 and $16 trillion.

Implemented properly, AI enables us to become leaner & faster, smarter, more personalized. With AI, we can examine and learn from data at a speed and scale that would have taken our predecessors generations. Proper implementation is not automatic: it requires skills, experience and discipline. Open-source toolkits have effectively “democratized” software development and led to a rapid proliferation of AI-based tools, built by experts and novices alike. This dynamic introduces both opportunities and risks. On the opportunity side, for example, AI models can easily be re-trained on new data sets, keeping them relevant and up to date.

On the flip side: model quality varies widely, use cases can be questionable, and the AI models themselves cannot be held accountable for erroneous outcomes. These realities present several governance issues, recognized by researchers, practitioners, business leaders and regulators alike. The regulation of AI as proposed by the European Commission (see inset text) recognizes these risks. It addresses the need for data quality, transparency, fairness, safety and robustness, and above all ethics in the application of AI. Where the regulation focuses on the “what”, our aim is to guide you on the “how.”

Proposed "Artificial Intelligence Act"

  • Adopts a broad definition of AI
  • Focuses on use cases (vs the technology itself), defending fundamental rights
  • Categorizes specific applications as forbidden or high-risk (Annex III)
  • Establishes quality standards & disclosure of high-risk AI applications
  • Defines requirements for assessment and ongoing assurance of conformity
  • Envisions regulatory control mechanisms
  • Quantifies penalties for breaches by type

AI is anything but objective

A key difference between AI and "classic" deterministic approaches is that AI learns from data rather than from a set of rules. However, it is a common misperception that having roots in data bestows objectivity on an AI model. In reality, AI is only as objective as its developers design it to be.

Computer vision tasks conveniently illustrate the point. An algorithm is trained on a set of images that have been labeled for concepts, such as "stop sign" – by humans. A Deep Neural Network (DNN) classifies each image by breaking it down into characteristics (e.g. edges, colors and shapes) and associating the result with the label. The DNN can do this very effectively. Yet even the best architectures fail if misled by the training data, such as “stop sign” images given the “no entry” label. The resulting DNN will be unable to recognize stop signs, assigning them, whether consistently or erratically, to the “no entry” category. An AI is only as good as the humans who trained it: entirely dependent on the selection of the data, its completeness, and consistently correct annotation.
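To make the annotation point concrete, here is a minimal sketch, assuming scikit-learn and NumPy; the digits dataset and the 30% noise rate are illustrative stand-ins for the mislabeled stop signs. Corrupting a share of the training labels measurably degrades the learned classifier:

```python
# Minimal sketch: label noise in training data degrades the model.
# Dataset, model family and noise rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30           # mislabel 30% of the training set
noisy[flip] = rng.integers(0, 10, flip.sum())  # assign random (mostly wrong) labels

clean_acc = LogisticRegression(max_iter=2000).fit(X_train, y_train).score(X_test, y_test)
noisy_acc = LogisticRegression(max_iter=2000).fit(X_train, noisy).score(X_test, y_test)
print(f"clean labels: {clean_acc:.2f}, 30% mislabeled: {noisy_acc:.2f}")
```

The architecture never changed; only the human annotation did, and test accuracy suffers accordingly.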

We live in the age of so-called "narrow" AI: AI that can identify objects or predict the next word in a sentence, for example. These AIs are trained & pruned to excel at one task alone. The data they are fed are curated by their creators. If trained to recognize cats, they will not recognize penguins as penguins, only as “not cats”. “General AI” remains a long way off.

The quality of an AI depends on numerous design decisions.

Beyond the perils of inaccurate or inconsistent annotation of training data, narrow AI also depends on countless design decisions, many of which can have a profound impact on how an AI functions. An AI model may be trained on and applied to data, but consider (as the sketch after this list illustrates):

  • which data?
  • with which objective function?
  • utilizing which approach?
  • which architecture?
  • which tunings and tweaking?
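Each of these questions maps to an explicit choice in code. As a minimal sketch, assuming scikit-learn (the synthetic data and model family are illustrative), varying a single design decision – the class weighting inside the objective – changes what the “same” model predicts:

```python
# Minimal sketch: one design decision (class weighting in the objective)
# is varied while data, approach and architecture stay fixed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic, imbalanced stand-in data (90% / 10% classes).
X, y = make_classification(n_samples=2000, n_features=8,
                           weights=[0.9, 0.1], random_state=0)

plain = LogisticRegression().fit(X, y)                             # default objective
balanced = LogisticRegression(class_weight="balanced").fit(X, y)   # reweighted objective

disagreement = (plain.predict(X) != balanced.predict(X)).mean()
print(f"The two designs disagree on {disagreement:.0%} of cases")
```

Neither design is “the correct” one; the point is that a person makes the choice, and that choice should be documented and justified.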

Implemented improperly, an AI model can systematically discriminate against what it does not know (whatever was absent from the training data), inadvertently perpetuating historical bias. It may succeed in classifying images for the wrong reasons (the background of the image rather than the subject), a defect that cannot be caught without sufficient transparency. It may be unstable, making a prediction one way, then another, despite near-identical inputs. These risks are not new: they accompanied model designers long before the “age of AI”. However, the mathematical nature of AI renders its models in many ways more susceptible to these risks, or at least susceptible in different ways than in the past.
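The instability risk in particular is easy to probe. A minimal sketch, assuming scikit-learn and NumPy (model, data and noise scale are illustrative): perturb each input slightly and count how often the prediction flips.

```python
# Minimal sketch of a stability probe: near-identical inputs should
# rarely flip the prediction. All names and scales are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
X_perturbed = X + rng.normal(scale=0.01, size=X.shape)  # tiny, plausible noise

flip_rate = (model.predict(X) != model.predict(X_perturbed)).mean()
print(f"Prediction flip rate under small perturbations: {flip_rate:.1%}")
```

A high flip rate under noise a user would consider negligible is a warning sign worth investigating before deployment.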

To achieve the promise of AI, stakeholders must be ready to trust its outputs.

Many of our customers already make use of AI, yet concerns and doubts about reliability remain. To resolve this challenge, Deloitte interviewed data scientists, computer scientists and mathematicians, as well as risk, ethics and economics experts worldwide. The result: a "Trustworthy AI Framework" encapsulating the six key criteria that AI must fulfill to earn our trust.

Trustworthy AI™


These criteria will sound familiar. We trust a brand that delivers the functionality and quality it promises, and does so reliably and without unpleasant surprises. To gain confidence in a food producer, the consumer demands quality (no health risks), responsibility (no child labor, sustainable use of resources) and honesty (no misinformation). Similarly, AI should offer quality (consistent accuracy), responsibility (ethical implementation) and honesty (transparency). AI may affect our lives differently than food does, but in both cases consumers and users have human expectations and fears.

Trust must be instilled at the core of AI, not only on its surface. To ensure that trust is "built in," we must operationalize trustworthy characteristics into the processes that give rise to AI products and services. This includes not only the essential steps of AI development, but also the surrounding process environment, which is:

  • Largely defined by the culture, strategy, controls mindset, as well as the product management and technical skills of the developer’s organization
  • Also defined by external factors: regulatory environment, societal norms and values

Relevant for the entire life cycle

One approach to Trustworthy AI could be to leave it to the auditors: "launch and learn", "fail fast". Agile processes are highly effective at achieving concrete results quickly. However, we should be careful not to apply these trendy terms in the wrong context. What works perfectly well for prototype development or A/B testing is likely not sufficient for a solid and reliable implementation in a production environment.

As we know from manufacturing: the cost of quality increases exponentially the later an error is identified. It is no different with AI. Failed AI models can incur not only economic costs but also reputational damage. This can affect all developers – even the tech giants who owe some of their spectacular growth to AI. Yet true disaster cases have so far remained thankfully few. We attribute this to two dynamics:

  • The tech giants are investing heavily in Trustworthy AI
  • Other companies have little exposure as yet, only now emerging from the proof-of-concept (PoC) phase, when AI models have been shielded from the dangers of the outside world

Tight deadlines, limited budgets and other pressures increase the risk of errors. Clear priorities and a rigorous approach are needed to ensure AI that is trustworthy by design: from conceptualization to prototyping, from integration to testing, and ultimately monitoring and general governance.

Strategize

Set the stage: business objectives, values, the measuring stick against which teams, products, services, processes and tools will be judged. Ensure a proper governance & control infrastructure. Capture use-case ideas that align with the strategic objectives and core values.

Build

Define targets, delineate scope, identify constraints, assess feasibility and risk. Bring developers together with stakeholders, determine the basic architecture and functionalities, refine data requirements, and respect explainability, fairness and privacy considerations.
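Fairness considerations, for instance, can be made testable at build time. A minimal sketch, assuming NumPy (the labels and the group attribute are hypothetical): compare false-positive rates across a protected group split.

```python
# Minimal sketch of one fairness check: false-positive rate parity
# across a (hypothetical) protected group attribute.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])   # toy ground truth
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 0])   # toy model output
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in ("a", "b"):
    negatives = (group == g) & (y_true == 0)          # true negatives per group
    fpr = (y_pred[negatives] == 1).mean()             # share wrongly flagged
    print(f"group {g}: false-positive rate {fpr:.2f}")
```

A large gap between the two rates would prompt a review of the training data and the objective before the build proceeds.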

Integrate

With concepts proven and minimum viable products built, the solution is put into production. This brings with it new considerations: uptime reliability, load balancing & scalability, data interfaces and compatibility with other systems in the ecosystem, and defenses against cyber attacks. Everything must be high-performance and sufficiently resilient for operational stresses & scenarios.
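At the interface level, production readiness can look like the following minimal sketch, assuming FastAPI and pydantic (one serving stack among many; the feature names and bounds are hypothetical): typed input validation before anything reaches the model, plus a health endpoint for the load balancer.

```python
# Minimal sketch of a validated model-serving interface.
# Run with, e.g.: uvicorn main:app  (assuming this file is main.py)
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class ScoringRequest(BaseModel):
    # Hypothetical features; real schemas come from the data contract.
    age: int = Field(ge=18, le=120)
    income: float = Field(ge=0)

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}  # probed by the load balancer

@app.post("/predict")
def predict(req: ScoringRequest) -> dict:
    # The real model.predict(...) would be called here;
    # a constant stands in for it in this sketch.
    return {"score": 0.5}
```

Malformed or out-of-range inputs are rejected at the boundary with a structured error, so the model never sees data outside its validated domain.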

Assure

Verify & validate against performance criteria: accuracy, fairness, transparency and others. Test reliability & reproducibility, and challenge design decisions throughout. Periodically re-train to uphold performance expectations and guard against model drift, and test controls to limit the impact of potential failure modes.
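Drift monitoring, for example, can be automated. A minimal sketch, assuming SciPy and NumPy (the data and the 1% significance threshold are illustrative): compare each feature's live distribution against the training distribution with a two-sample Kolmogorov-Smirnov test and flag features that have shifted.

```python
# Minimal sketch of a per-feature drift check via the two-sample KS test.
# Data and threshold are illustrative stand-ins for real pipelines.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 3))   # stand-in for training data
live = train + np.array([0.0, 0.5, 0.0])       # feature 1 has drifted

for i in range(train.shape[1]):
    stat, p_value = ks_2samp(train[:, i], live[:, i])
    flag = "DRIFT" if p_value < 0.01 else "ok"
    print(f"feature {i}: KS={stat:.3f} p={p_value:.3g} -> {flag}")
```

A flagged feature would trigger investigation and, where warranted, the periodic re-training described above.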

Contact us

Jakub Höll

Director

Jakub leads the Operational Risk Team in the Risk Advisory department of Deloitte Czech Republic. He focuses on project management, agile and digital transformations of companies, data privacy and governance.

Vilém Krejcar

Senior Consultant

Vilém works in the Financial Risk & ESG team at Strategy, Risk and Transactions (SR&T), with 4 years of professional experience, primarily with FSI clients. He has a strong data-driven and analytical background.