
Controlling AI

Using AI in finance and controllership

The current trajectory of Generative AI (GenAI) and its expanding implications across finance highlight both opportunities and risks as GenAI starts to affect the finance and accounting landscape. A strong AI risk management framework for these implementations can help organizations incorporate trustworthy AI into a broader vision of the future of controls.

February 19, 2025

A blog post by Katie Glynn, Casey Kacirek, and Jennifer Gerasimov

One can hardly have a business conversation in the current environment without mentioning artificial intelligence (AI). The buzz is everywhere. But how are companies incorporating AI into their finance functions and controls systems? And if they are, how are they managing it or getting comfortable with it? While the AI train is gaining steam, we can explore some current marketplace trends around AI and Generative AI (GenAI) and possible implementations that may create effective processes and efficiencies in the finance and accounting landscape. These implementations, backed by a strong AI risk management framework, may provide organizations with tools to incorporate AI into a broader vision of the future of controls.

AI and GenAI: The next generation of AI is here

The collective pace of innovation, business adoption, and economic impact signals a comparative “smartphone moment” for AI. From early AI models around language understanding and image recognition, the evolution to GenAI is a flash point. While this is a new era for the collective world, it is also signaling to organizations and boards a critical need to understand the implications and impact of AI as well as the necessity to manage risk, governance, and trust implications for AI within the enterprise.

In this new era of AI, foundation models are what set GenAI apart from previous generations of deep learning models. What does this mean? These foundation models, or large language models, are pre-trained on vast amounts of data and computation to perform prediction tasks. Once trained, they can be adapted to predict or generate outputs for a broad range of new tasks. For GenAI, this translates to tools that create original content across modalities (e.g., text, images, audio, code, voice, video) that would previously have taken human skill and expertise to create.

It may sound complex, but exploring how this translates into possible applications and their relative impact on professionals and organizations, including how AI can augment the work and workforce, can help simplify how GenAI is defined.

Possible AI use cases for finance and accounting

Controllership can systematize recurring entries and reconciliations, perform a source-to-target chart of account mapping, review and analyze contract terms, and prepare internal and external financial reporting that includes commentary and insights.

Strategic finance can assess corporate development deals, run due diligence, identify opportunities for capital improvement, and perform risk assessments and advanced scenario modeling.

Internal audit can proactively detect and prevent fraudulent activities, analyze data and generate audit reports, and determine compliance with regulations and internal policies.

Financial planning and analysis can predict income statements, balance sheets, and cash flow; automate the creation of data visuals and presentations; provide quick reporting and commentary; and perform quality checks.

These opportunities to augment work and the workforce with GenAI models present a plethora of possible AI use cases for the enterprise, but they also carry risks and trust implications that organizations will need to learn how to manage.

AI risk management through a trustworthy AI framework

A strong AI risk management framework puts trust at the core of AI operations. It contemplates the AI life cycle stages, regulatory jurisdictions, adjacent programs, control frameworks, and governance cadences needed to manage AI risk and establish trust in AI capabilities for internal and external stakeholders. The first step to bringing this framework to life is implementing an enterprise AI policy, which can offer a foundation for effective, responsible, and ethical AI practices. From there, to build out the framework, we can look at examples of the governance routines that make up an enterprise AI policy.

Examples of an AI framework within the enterprise AI policy

AI tracking and inventory

  • Having a standardized definition of AI across the organization
  • Incorporating a risk rating or tiering methodology
  • Using a centralized inventory with recurring maintenance
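
To make the inventory idea concrete, here is a minimal sketch of what a centralized, risk-rated AI inventory with recurring maintenance could look like in code. All names, tiers, and the 90-day review window are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical risk tiers; an organization would define its own tiering methodology.
RISK_TIERS = ("low", "medium", "high")

@dataclass
class AIUseCase:
    """One entry in a centralized AI inventory."""
    name: str
    owner: str
    description: str
    risk_tier: str
    last_reviewed: date

    def __post_init__(self):
        # Enforce the standardized risk-tiering vocabulary at the point of entry.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"risk_tier must be one of {RISK_TIERS}")

def overdue_for_review(inventory, today, max_age_days=90):
    """Return entries whose recurring maintenance review is overdue."""
    return [u for u in inventory
            if (today - u.last_reviewed).days > max_age_days]
```

A governance routine could run a check like `overdue_for_review` on a schedule and route stale entries to their owners for re-review.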

Life cycle standards

  • Having defined processes and procedures across the AI life cycle (e.g., design, development, deployment)
  • Establishing tollgates with cross-functional stakeholder involvement
  • Implementing a well-defined control structure

Risk assessment and measurement

  • Designing and monitoring risk metrics
  • Categorizing risks aligned to an organizational risk taxonomy
  • Enabling meaningful reporting, including quantitative and qualitative approaches to AI risk

Regulatory and functional alignment

  • Providing the flexibility to meet different regulatory requirements across jurisdictions
  • Aligning to other risk programs such as data risk, model risk management, and privacy risk

A risk-based AI governance framework and enterprise AI policy serve as the foundation for effective, responsible, and ethical AI practices within an organization. What does a risk-based framework mean? It encompasses broad AI risk management principles, a detailed risk and controls matrix, regulatory alignment, governance and defined roles and processes, and risk-rated AI inventory and reporting.

AI risk management in action

Using a couple of examples, we can explore what AI risk management may look like in action through a finance and controllership lens.

Example: Assisting the analyst with autonomous insights

With information for an analyst prepared by AI, what would be the opportunities and possible risks? The information and content produced through AI may include footnotes, accounting memos, narratives supporting valuation reports, budgets and forecasts, meeting minutes summaries, control deficiency assessments, and financial scenario modeling or forecasting.

What are the risks?
If the AI solution makes incorrect decisions or develops erroneous patterns due to deficiencies in data (e.g., inaccurate, obsolete, incomplete, unsuitable), incorrect or incomplete model assumptions, unauthorized data modification, or the lack of verification and validation (V&V) checks, then the output cannot be relied upon, which may affect key business processes the solution is supporting.

How can it be controlled?
To check the AI output for accuracy, establish a human-in-the-loop validation process for impactful decisions or actions, and document those decisions and the review process. Designate a person responsible for this review and ensure they are competent for the role.
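
As one illustration, a human-in-the-loop gate could be sketched as follows. The materiality threshold, confidence cutoff, and field names are all hypothetical; the point is simply that impactful outputs are blocked until a documented human approval exists.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewRecord:
    """Documents a human sign-off on an AI-generated output."""
    output_id: str
    reviewer: str
    approved: bool
    notes: str
    reviewed_at: datetime

def requires_human_review(output, materiality_threshold=100_000):
    """Route impactful or low-confidence outputs to a human reviewer.
    Threshold and cutoff are illustrative policy choices."""
    return (output.get("amount", 0) >= materiality_threshold
            or output.get("confidence", 1.0) < 0.9)

def release(output, review_log):
    """Release an output only if it needs no review or has an approval on file."""
    if not requires_human_review(output):
        return True
    return any(r.output_id == output["id"] and r.approved for r in review_log)
```

The review log doubles as the documentation trail the control calls for: each release of a material output is traceable to a named, approved sign-off.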

Example: Autonomous accountant

Given historical general ledger (GL) data as input, along with training data that includes closing activities, system interface errors, and journal entry (JE) posting errors, an AI tool can produce automated close activities as output, including re-running interface processing, flagging JE posting errors, making JE corrections, and auto-posting. With this input-to-output complexity, there is inherently more risk in the process.

What are the risks?
If training data does not reflect real-world inputs as well as possible edge cases that the AI tool may encounter, then model accuracy can be reduced, and it may have adverse impacts on the business process the AI solution is supporting. In addition, if there is a lack of interpretability in the AI model, then it could lead to a misuse of the technology or an inability to meet business objectives.

How can it be controlled?
Perform extensive testing of the solution using training data that reflects real-world inputs. Document testing results and remediate identified issues. A sample of the AI system’s decisions, or outputs, should also be reviewed regularly for reliability, with adjustments made to the model as needed.
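
The regular sampling step could be as simple as the sketch below: draw a reproducible random sample of the period's AI decisions for human review. The 5% rate and minimum sample size are illustrative policy choices, not recommendations.

```python
import random

def sample_for_review(decisions, rate=0.05, seed=0, min_sample=5):
    """Draw a reproducible random sample of AI decisions for periodic human review.

    A fixed seed makes the selection auditable: re-running the routine
    reproduces exactly which items were pulled for review.
    """
    k = max(min_sample, int(len(decisions) * rate))
    k = min(k, len(decisions))  # never ask for more items than exist
    rng = random.Random(seed)
    return rng.sample(decisions, k)
```

Findings from the reviewed sample would then feed back into model adjustments and into the documented testing results described above.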

There is no doubt that GenAI is on an upward, and some may say unstoppable, trajectory across the marketplace, and this includes finance and accounting. Before jumping onto that high-speed train with eyes closed, organizations should make sure they have a strong risk and controls framework in place to better facilitate trustworthy AI.

To explore additional insights into the current state of AI and where it intersects with Deloitte’s vision for the future of controls and finance, listen to our Dbriefs webcast on demand: Controlling AI: Implementing AI into finance and controls
