NIST AI RMF and Deloitte's Trustworthy AI Framework™

How organizations can enhance AI trustworthiness

Download the full report: NIST AI RMF and Deloitte's Trustworthy AI Framework™

How can organizations get started?

To effectively leverage the NIST AI RMF, organizations should begin by assessing their current AI capabilities and strategy, as well as how they intersect with broader enterprise risk management (ERM) efforts. The framework is intended to be flexible, helping organizations align their practices with applicable laws, regulations, and norms, which may differ by industry or sector.

Once an effective baseline is established, organizations can start applying the framework's insights on measuring risk, setting risk tolerance, prioritizing risks, and integrating AI-related risk management concepts.

As organizations’ AI capabilities mature, they should revisit the NIST framework and its core functions and continue iterating on supporting risk management capabilities to strengthen trustworthy AI.
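For teams that maintain an inventory of AI systems, the iterative process described above can be sketched as a lightweight risk register. The example below is illustrative only; the field names, scoring scale, and tolerance threshold are hypothetical assumptions, not part of the NIST AI RMF.

```python
from dataclasses import dataclass

# A minimal, illustrative AI risk register aligned to the iterative process
# described above. The field names, 1-5 scoring scale, and tolerance threshold
# are hypothetical assumptions, not NIST AI RMF guidance.

@dataclass
class AIRiskEntry:
    system: str        # AI system or use case being assessed
    risk: str          # short description of the identified risk
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    owner: str         # accountable role, reflecting governance responsibilities

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def prioritize(entries: list[AIRiskEntry], tolerance: int = 9) -> list[AIRiskEntry]:
    """Return the risks that exceed the stated tolerance, highest score first."""
    return sorted((e for e in entries if e.score > tolerance),
                  key=lambda e: e.score, reverse=True)


if __name__ == "__main__":
    register = [
        AIRiskEntry("credit-scoring model", "potential unfair outcomes for protected groups",
                    3, 5, "Model Risk Office"),
        AIRiskEntry("customer chatbot", "possible disclosure of personal data in responses",
                    2, 4, "Privacy Office"),
    ]
    for entry in prioritize(register):
        print(f"{entry.system}: {entry.risk} (score {entry.score}) -> owner: {entry.owner}")
```

A register like this can be revisited each time the organization re-applies the framework's core functions, with the tolerance threshold adjusted as risk appetite and regulatory expectations evolve.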

How the essential characteristics of the NIST AI RMF align with Deloitte’s Trustworthy AI Framework™

As AI and other advanced automated systems become increasingly common tools for organizations, Deloitte recognized the need to approach these evolving technologies ethically and responsibly. As pictured in figure 4, Deloitte’s Trustworthy AI Framework has empowered organizations to build trustworthy AI and helped prepare them for the growing regulatory focus on AI and other automated systems.

The NIST AI RMF outlines seven characteristics for achieving responsible use of AI systems and effectively managing AI risk: valid and reliable; accountable and transparent; safe; secure and resilient; explainable and interpretable; privacy-enhanced; and fair. Each characteristic builds on the socio-technical viewpoint NIST advocates for implementing and managing AI technologies; the accountability and transparency attributes, however, also pertain to the external processes and context surrounding the AI systems.

Over the past decade, Deloitte developed its Trustworthy AI Framework based on hands-on experience and cross-industry leading practices to help clients manage AI risk throughout the AI lifecycle. Deloitte’s Trustworthy AI Framework comprises six characteristics: fair and impartial; robust and reliable; privacy; safe and secure; responsible and accountable; and transparent and explainable.

The characteristics outlined by the NIST AI RMF align well with Deloitte’s Trustworthy AI Framework, and this focus on trustworthiness can help organizations effectively use a variety of automated systems while remaining confident in the security and performance of their AI models.
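To make the alignment concrete, the sketch below maps each NIST AI RMF characteristic to the Trustworthy AI Framework characteristic(s) that most closely address it. The pairings are an interpretive assumption for illustration, not an official crosswalk published by NIST or Deloitte.

```python
# Illustrative crosswalk between the seven NIST AI RMF trustworthiness
# characteristics and the six characteristics of Deloitte's Trustworthy AI
# Framework. The pairings are interpretive assumptions, not an official mapping.
NIST_TO_TRUSTWORTHY_AI = {
    "valid and reliable": ["robust and reliable"],
    "accountable and transparent": ["responsible and accountable", "transparent and explainable"],
    "safe": ["safe and secure"],
    "secure and resilient": ["safe and secure", "robust and reliable"],
    "explainable and interpretable": ["transparent and explainable"],
    "privacy-enhanced": ["privacy"],
    "fair": ["fair and impartial"],
}


def coverage(nist_characteristic: str) -> list[str]:
    """Return the Trustworthy AI Framework characteristics that address a NIST characteristic."""
    return NIST_TO_TRUSTWORTHY_AI.get(nist_characteristic.lower(), [])


if __name__ == "__main__":
    for nist_char, deloitte_chars in NIST_TO_TRUSTWORTHY_AI.items():
        print(f"{nist_char:30s} -> {', '.join(deloitte_chars)}")
```

A crosswalk of this kind can help teams that have already adopted one framework check which assessments and controls carry over when reporting against the other.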

Get in touch

Oz Karan
Risk & Financial Advisory Trustworthy AI Leader
Partner
Deloitte & Touche LLP

Beena Ammanath
US Trustworthy and Ethical AI Leader
Global Deloitte AI Institute
Deloitte Consulting LLP

Ed Bowen
Advisory AI CoE Leader
Managing Director
Deloitte Consulting LLP
AI Center of Excellence
