Posted: 15 Oct. 2023 | 6 min read

Steps to promote trustworthy AI

How organizations can navigate fast-moving AI risk and regulations

By Ryan Hittner, Audit & Assurance Principal, Deloitte & Touche LLP

Talking points
  • The remarkable potential of artificial intelligence (AI) may be tempered by growing challenges to data privacy, risk mitigation, and cybersecurity—in short, to trust and transparency.
  • How can you harness AI’s vast potential while managing its risks?
  • What steps can companies take to help develop effective AI strategies that build transparency and trust with shareholders and other stakeholders?

The word on the street

Thanks to recently developed large language models, or LLMs, we may be close to realizing a decades-old quest: the creation of intelligent machines. This technology, loosely modeled on the human brain, has opened a new realm called generative AI: software that can write coherent text and create images and computer code at a level close to that of humans.

As we ponder the vast business potential of AI, one thing is abundantly clear: AI requires sound human decision-making to curb its potential shortcomings.

So how do we harness the power of AI while managing its risks in a manner that engenders trust rather than suspicion?

Putting our POV to work

Regulation and transparency are likely to be part of the answer. For example, New York City recently began enforcing Local Law 144-21, which requires bias audits of automated employment decision tools (AEDTs) used by companies for employment decisions (e.g., selecting job candidates from pools of applicants). The law also mandates that companies publish a summary of the audit results and alert job applicants or employees to the use of AEDTs, as well as to the “categories” and “screens” used in evaluating qualifications and background.
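
To make the audit requirement concrete, here is a minimal sketch (in Python) of the impact-ratio calculation that bias audits under the law generally report: each category’s selection rate divided by the selection rate of the most selected category. The category names and counts below are hypothetical, for illustration only.

```python
# Minimal sketch of the impact-ratio calculation a bias audit under
# NYC Local Law 144-21 generally reports. All counts are hypothetical.

# Applicants evaluated and selected by the tool, per demographic category
outcomes = {
    "category_a": {"applicants": 400, "selected": 120},
    "category_b": {"applicants": 300, "selected": 60},
    "category_c": {"applicants": 150, "selected": 45},
}

# Selection rate = selected / applicants, for each category
rates = {cat: c["selected"] / c["applicants"] for cat, c in outcomes.items()}

# Impact ratio = category's selection rate / highest selection rate
highest = max(rates.values())
for cat, rate in rates.items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```

An impact ratio well below 1.0 for a category flags the kind of disparity these audits are designed to surface.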

Federal, state, and local governments and agencies, as well as international bodies, are taking a closer look at how organizations use AI. For example, officials in several states have proposed AI legislation and regulations to enhance transparency and accountability, while the National Institute of Standards and Technology has released an AI Risk Management Framework. In Europe, the proposed Artificial Intelligence Act seeks to provide a framework for regulating AI based on the level of risk a system might pose and to harmonize cross-border rules for the technology.

This activity suggests that it’s time for companies to focus on developing or refreshing their own AI strategies, as well as usage and governance models, because concerns over AI go beyond bias. It’s a matter of trust: the trust between companies and their stakeholders, including shareholders and employees.

AI presents new and growing challenges to data privacy, risk mitigation (including reputational risk), and cybersecurity, as well as potential disruption of operations and business models. And those challenges may increase as the technology and its use cases expand. So, what should be on a company’s AI transparency and trust agenda as it works to reimagine and manage risk mitigation, operational controls, and governance processes?

Steps to consider
  1. Compile an inventory of the AI technologies and use cases in place at your company. An inventory is vital to assessing potential risks and facilitating discussions with other stakeholders in the organization (e.g., management, compliance, and legal); a simple inventory record might look like the sketch following this list.
  2. Review existing laws and proposals to proactively evaluate the potential impact of current and pending legal and compliance obligations.
  3. Design and codify a mission statement and guidelines for the development and use of AI within your organization.
  4. Create an AI-specific risk framework and governance model to put pertinent controls and real-time monitoring systems in place.
  5. Bring employees up to speed on acceptable use by creating appropriate guidelines, supported by the requisite training.
  6. Work with your legal counsel to identify compliance gaps, determine appropriate remediation, and take other necessary measures to avoid missteps.
  7. Engage with your vendors to learn about their uses of AI, as well as the standards and regulations they follow, to help mitigate any acquired business or reputational risks. Update service agreements, contracts, and licensing terms as needed.
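
As an illustration of step 1, here is a minimal sketch of what a single entry in such an inventory might capture. It assumes Python 3.10+; the record layout, field names, and sample values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical record for one entry in an AI use-case inventory.
# Field names and values are illustrative, not a prescribed schema.
@dataclass
class AIUseCase:
    name: str                  # internal name of the system
    business_purpose: str      # why it exists, in plain language
    model_type: str            # e.g., "LLM", "classifier", "ranking"
    owner: str                 # accountable team or individual
    vendor: str | None = None  # third-party provider, if any
    data_sources: list[str] = field(default_factory=list)
    risk_notes: list[str] = field(default_factory=list)

inventory = [
    AIUseCase(
        name="resume-screener",  # hypothetical system
        business_purpose="Shortlist applicants for recruiter review",
        model_type="classifier",
        owner="HR Technology",
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        data_sources=["applicant resumes", "job descriptions"],
        risk_notes=["in scope for NYC Local Law 144-21 bias audits"],
    ),
]

for use_case in inventory:
    print(f"{use_case.name} (owner: {use_case.owner})")
```

Even a lightweight record like this gives management, compliance, and legal a shared starting point for the risk and remediation discussions in steps 2 through 7.
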
What all this talk means for you

Expectations around AI continue to grow, and companies are increasingly harnessing AI and expanding its impact. Just think: AI is already capable of writing a post like this one with some human assistance. Next time, it just might. That’s a reality worth pondering.

To learn more, read our Wall Street Journal article, and if you have any questions, please contact me.

Get in touch

Ryan Hittner

Audit & Assurance Principal

Ryan is an Audit & Assurance principal with more than 15 years of management consulting experience, specializing in strategic advisory to global financial institutions with a focus on banking and capital markets. Ryan co-leads Deloitte’s Artificial Intelligence & Algorithmic practice, which is dedicated to advising clients on developing and deploying responsible AI, including risk frameworks, governance, and controls related to artificial intelligence (AI) and advanced algorithms.

Ryan also serves as deputy leader of Deloitte’s Valuation & Analytics practice, a global network of seasoned industry professionals with experience encompassing a wide range of traded financial instruments, data analytics, and modeling. In this role, Ryan leads Deloitte’s Omnia DNAV Derivatives technologies, which incorporate automation, machine learning, and large datasets.

Ryan previously served as a leader in Deloitte’s Model Risk Management (MRM) practice and has extensive experience providing a wide range of model risk management services to financial services institutions, including model development, model validation, technology, and quantitative risk management. He specializes in quantitative advisory across various asset classes and risk domains, such as AI and algorithmic risk, model risk management, liquidity risk, interest rate risk, market risk, and credit risk. He serves his clients as a trusted service provider to the CEO, CFO, and CRO in solving problems related to risk management and financial risk management. Additionally, Ryan has worked with several of the top 10 US financial institutions, leading quantitative teams that address complex risk management programs, typically involving process reengineering. Ryan also leads Deloitte’s initiatives focusing on ModelOps and cloud-based solutions, driving automation and efficiency within the model/algorithm lifecycle.

Ryan received a BA in Computer Science and a BA in Mathematics & Economics from Lafayette College.

Media highlights and perspectives
  • First Bias Audit Law Starts to Set Stage for Trustworthy AI, August 11, 2023 – In this article, Ryan was interviewed by the Wall Street Journal’s Risk and Compliance Journal about New York City Local Law 144-21, which went into effect on July 5, 2023.
  • Perspective on New York City Local Law 144-21 and preparation for bias audits, June 2023 – In this article, Ryan and other contributors discuss the new rules coming for the use of AI and other algorithms in hiring and other employment decisions in New York City.
  • Road to Next, June 13, 2023 – In the June edition, Ryan sat down with PitchBook to discuss the current state of AI in business and the factors shaping the next wave of workforce innovation.