Perspectives

Exploring artificial intelligence in finance

Part II: Creating an AI framework for successful applications of Generative AI

Artificial intelligence (AI) technologies are rapidly transforming today’s business models, and emerging Generative AI and advanced applications are presenting new opportunities and possibilities for AI in finance and accounting. In the second part of our series about AI in finance and accounting, we explore ways to manage emerging AI risks and how to implement a trustworthy AI framework for success.

March 13, 2024

A blog post by Beth Kaplan, Katie Glynn, Court Watson, Oz Karan, and Madeline Mitchell

Artificial intelligence (AI) and machine learning technologies are rapidly transforming today’s controllership business models, and this new generation of AI capabilities has the potential to play a critical role in the future of finance. In the first part of our series on this new frontier in AI, we explored the building blocks and various applications of AI and Generative AI in finance and accounting, as well as their possible implications across businesses. However, understanding what AI is—and is not—is only the first step toward successfully implementing it in the finance function. To implement a meaningful AI strategy, it is critical to understand the emerging risks around AI and Generative AI, as well as a framework that sets an AI strategy up for success.

As finance and accounting professionals begin to incorporate AI capabilities and enhance their processes with AI, changes driven by this evolution and rapid adoption of AI and Generative AI technologies call for reimagining governance processes, mechanisms, and operational controls. This starts with understanding emerging risks and then incorporating a trustworthy framework that can drive AI policy and a strategy for success.

Implementing AI into the finance and controllership function

Managing AI risks

There are numerous risks that are present when adopting AI and Generative AI technologies. While many more will likely emerge, some of the more common current risks include:

Privacy: Models are built on data sharing and may require specific consent for data used (confidential information, personally identifiable information) and require bespoke data handling processes.

Regulatory permissibility: Emerging and inconsistent regulation may result in the allowable use of AI in one jurisdiction being impermissible in others, or it may require additional bias testing or reporting.

Amplification of biases: Inherent biases in the underlying data can be amplified once models are trained on that data.

Safe usage: The risk of safe usage is associated with how and where large language models (LLMs) are used, such as to generate autonomous action for machinery on a factory floor.

Responsible applications: The risk of responsible applications is associated with the various use cases that will likely be contemplated, such as the misuse of LLMs to heighten automated cyberthreats.

Sovereignty: The risk of sovereignty relates to the expectation that AI models trained on specific data sets are subject to sovereignty or residency regulations and will be required to run only on data centers within that jurisdiction.

Lack of certifications: LLMs may become subject to future certification or regulatory requirements as they are increasingly used for insights, advice, or expert information.

Managing these and other AI risks that are likely to emerge is possible through a framework and AI policy, but it is crucial to understand these risks so governance mechanisms can be built into an AI policy.

Building an AI framework

A strong AI risk management framework puts trust at the core of AI operations. It contemplates the AI life cycle stages, regulatory jurisdictions, adjacent programs, control frameworks, and governance cadences needed to manage AI risk and establish trust in AI capabilities for internal and external stakeholders. The first step to bringing this framework to life is implementing an enterprise AI policy, which serves as the foundation for effective, responsible, and ethical AI practices. To illustrate what a trustworthy framework can look like, let’s walk through the governance routines that would make up an enterprise AI policy.

Examples of the AI framework within the enterprise AI policy

AI tracking and inventory

  • Having a standardized definition of AI across the organization
  • Incorporating a risk rating or tiering methodology
  • Using a centralized inventory with recurring maintenance

Life cycle standards

  • Having defined processes and procedures across the AI life cycle (e.g., design, development, deployment)
  • Establishing tollgates with cross-functional stakeholder involvement
  • Implementing a well-defined control structure

Risk assessment and measurement

  • Designing and monitoring risk metrics
  • Categorizing risks aligned to an organizational risk taxonomy
  • Enabling meaningful reporting, including quantitative and qualitative approaches to AI risk
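To make the risk measurement bullets concrete, here is a minimal sketch of checking quantitative risk metrics against thresholds and rolling breaches up by risk-taxonomy category. The metric names, categories, and thresholds are illustrative assumptions, not part of any standard taxonomy.

```python
# Hypothetical risk metrics mapped to an organizational risk taxonomy.
# Each metric: (taxonomy category, observed value, alert threshold).
metrics = {
    "pii_exposure_rate":  ("privacy",  0.02, 0.01),
    "bias_disparity":     ("fairness", 0.05, 0.10),
    "hallucination_rate": ("accuracy", 0.08, 0.05),
}

def breaches_by_category(metrics: dict) -> dict[str, list[str]]:
    """Group metrics that exceed their thresholds by taxonomy category."""
    out: dict[str, list[str]] = {}
    for name, (category, value, threshold) in metrics.items():
        if value > threshold:
            out.setdefault(category, []).append(name)
    return out
```

A roll-up like this is one way to feed both quantitative values and qualitative categories into recurring risk reporting.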

Regulatory and functional alignment

  • Providing the flexibility to meet different regulatory requirements across jurisdictions
  • Aligning to other risk programs such as data risk, model risk management, and privacy risk

Pillars of a trustworthy AI framework

Comprehensive AI risk management principles serve as the cornerstone of sound AI practices. Deloitte’s Trustworthy AI™ framework provides the backdrop for a sustainable, safe, and responsible AI use environment and risk management program, with pillars that anchor governance across the AI life cycle.

Getting started with AI implementation

Getting started with a Generative AI or broader AI implementation strategy is a complex process, given the risks involved and the rapid pace of change in the marketplace. The following considerations and leading practices can assist with implementing Generative AI technology to help optimize finance and accounting processes.

AI implementation checklist

AI strategy

  • Define and implement an overarching AI strategy and management framework
  • Don’t forget regulatory compliance: Review the current regulatory landscape and prepare for new requirements to be issued

Guidance and training

  • Create awareness on the acceptable usage of Generative AI within the organization, including approved use cases for Generative AI applications
  • Update relevant information security policies to incorporate considerations around Generative AI

Risk framework and governance

  • Design and implement controls to address Generative AI-specific risks based on regulations and industry standards
  • Monitor controls and assess controls’ effectiveness to initiate remediation when needed
  • Identify and manage misinformation created through Generative AI by implementing appropriate governance mechanisms

Licensing and permissions

  • Ensure that appropriate licenses and contractual obligations are in place with Generative AI vendors to govern the usage of any shared data
  • Consider leveraging enterprise versions of Generative AI tools where there are options to apply organization-specific security requirements

Monitoring

  • Monitor the use of Generative AI within the organization through appropriate existing technologies such as secure web gateways and data loss prevention solutions

To understand more about the new frontier in AI technologies, Generative AI applications, and possible opportunities for AI in finance and accounting, read Part I in our series: Exploring Generative AI in finance.

To hear our panel discussion about the new era of AI opportunities in finance and accounting, listen to our webcast on demand: A new frontier: Exploring artificial intelligence in finance.

