Controlling AI
Using AI in finance and controllership
The current trajectory of Generative AI (GenAI) and its expanding role across finance highlights both opportunities and risks as GenAI begins to affect the finance and accounting landscape. A strong AI risk management framework for these implementations can help organizations incorporate trustworthy AI into a broader vision of the future of controls.
February 19, 2025
A blog post by Katie Glynn, Casey Kacirek, and Jennifer Gerasimov
One can hardly have a business conversation in the current environment without mentioning artificial intelligence (AI). The buzz is everywhere. But how are companies incorporating AI into their finance functions and controls systems? And if they are, how are they managing it or getting comfortable with it? While the AI train gains steam, we can explore current marketplace trends around AI and Generative AI (GenAI) and possible implementations that may create effective processes and efficiencies in the finance and accounting landscape. Backed by a strong AI risk management framework, these implementations may give organizations the tools to incorporate AI into a broader vision of the future of controls.
AI and GenAI: The next generation of AI is here
The collective pace of innovation, business adoption, and economic impact signals a comparative “smartphone moment” for AI. From early AI models focused on language understanding and image recognition, the evolution to GenAI is a flash point. While this is a new era for the world at large, it also signals to organizations and boards a critical need to understand the implications and impact of AI, and to manage the risk, governance, and trust considerations it raises within the enterprise.
In this new era of AI, foundation models are what set GenAI apart from previous generations of deep learning models. What does this mean? These foundation models, or large language models, are pre-trained on vast amounts of data and computation to perform a prediction task. Once trained, they can be adapted to perform or generate output for a broad range of new tasks. For GenAI, this translates to tools that create original content across modalities (e.g., text, images, audio, code, voice, video) that previously would have taken human skill and expertise to create.
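To make the idea concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library (not something referenced in this post), of how a single pre-trained model can be prompted to produce content for a new task without task-specific training:

```python
# Minimal illustration (assumes the open-source Hugging Face `transformers`
# library): one pre-trained model, reused for a new task purely via prompting.
from transformers import pipeline

# Load a small, general-purpose pre-trained language model.
generator = pipeline("text-generation", model="gpt2")

# The same model, prompted differently, drafts different kinds of content.
draft = generator(
    "Draft a one-sentence summary of the Q3 revenue variance:",
    max_new_tokens=40,
)
print(draft[0]["generated_text"])
```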
It may sound complex, but exploring how this translates into possible applications and their impact on professionals and organizations, including how AI can augment work and the workforce, can help demystify what GenAI means in practice.
These opportunities to augment work and the workforce with GenAI models present a plethora of possible AI use cases for the enterprise, but they also carry risks and trust implications that organizations will need to learn how to manage.
A risk-based AI governance framework and an enterprise AI policy serve as the foundation for effective, responsible, and ethical AI practices within an organization. What does a risk-based framework mean? It encompasses broad AI risk management principles, a detailed risk and controls matrix, regulatory alignment, governance with defined roles and processes, and a risk-rated AI inventory with reporting.
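As a rough illustration of what a risk-rated AI inventory might look like in practice, here is a hypothetical sketch; the field names, tiers, and tiering logic are assumptions for illustration, not a prescribed standard:

```python
# Illustrative sketch only: one way a risk-rated AI inventory entry might be
# structured for governance reporting. Field names and tiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                                       # e.g., "Autonomous close assistant"
    owner: str                                      # accountable business owner
    data_sensitivity: str                           # "low" | "medium" | "high"
    decision_impact: str                            # "advisory" | "automated"
    regulatory_scope: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)  # mapped controls

def risk_tier(use_case: AIUseCase) -> str:
    """Assign a simple risk tier used for inventory reporting."""
    if use_case.decision_impact == "automated" or use_case.data_sensitivity == "high":
        return "high"
    if use_case.regulatory_scope:
        return "medium"
    return "low"

inventory = [
    AIUseCase(
        name="JE anomaly summarizer",
        owner="Controllership",
        data_sensitivity="medium",
        decision_impact="advisory",
        controls=["human-in-the-loop review", "output sampling"],
    )
]
print({uc.name: risk_tier(uc) for uc in inventory})
```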
AI risk management in action
Using a couple of examples, we can explore what AI risk management may look like in action through a finance and controllership lens.
Example: Assisting the analyst with autonomous insights
When AI prepares information for an analyst, what are the opportunities and possible risks? The information and content produced by AI may include footnotes, accounting memos, narratives supporting valuation reports, budgets and forecasts, summaries of meeting minutes, control deficiency assessments, and financial scenario modeling or forecasting.
What are the risks?
If the AI solution makes incorrect decisions or develops erroneous patterns due to deficiencies in data (e.g., inaccurate, obsolete, incomplete, unsuitable), incorrect or incomplete model assumptions, unauthorized data modification, or the lack of verification and validation (V&V) checks, then the output cannot be relied upon, which may affect key business processes the solution is supporting.
How can it be controlled?
To check AI output for accuracy, establish a human-in-the-loop validation process for impactful decisions or actions, and document those decisions and processes. Designate a person responsible for this review and confirm they are competent for the role.
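As one hypothetical illustration of such a human-in-the-loop gate, the sketch below records each review decision before an AI-drafted output is relied on; the function and field names are assumptions for illustration, not a real product API:

```python
# Hedged sketch: a simple human-in-the-loop gate for AI-drafted content.
import datetime

review_log = []  # persisted documentation of review decisions

def approve_output(ai_draft: str, reviewer: str, approved: bool, notes: str = "") -> bool:
    """Record the reviewer's decision before the AI output is relied upon."""
    review_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reviewer": reviewer,             # designated, competent owner of the control
        "approved": approved,
        "notes": notes,
        "draft_excerpt": ai_draft[:200],  # evidence of what was reviewed
    })
    return approved

# Only approved drafts move downstream (e.g., into the close package).
draft = "AI-generated footnote on lease obligations..."
if approve_output(draft, reviewer="senior.analyst@example.com", approved=True,
                  notes="Tied amounts to the lease schedule."):
    print("Draft released for inclusion.")
```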
Example: Autonomous accountant
Given historical general ledger (GL) data as input, along with training data that includes closing activities, system interface errors, and journal entry (JE) posting errors, an AI solution can produce auto-close activities as output. These include re-running interface processing, identifying JE posting errors, making JE corrections, and auto-posting. With this input-to-output complexity, there is inherently more risk in the process.
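As a purely illustrative sketch of that input-to-output flow, the example below routes each exception to either an automated fix or an accountant; every function name and threshold is a hypothetical placeholder, not a real ERP or vendor API:

```python
# Hedged sketch of an auto-close flow. All names and logic are stubs for
# illustration; in practice the correction proposals would come from the model.

def rerun_interface(error):
    return {"action": "rerun", "job": error["job"]}

def propose_correction(je_error):
    # Stub: a trained model would supply the suggested fix and its confidence.
    return {"action": "correct", "je": je_error["je_id"],
            "confidence": je_error.get("confidence", 0.5)}

def auto_close(interface_errors, je_errors, confidence_floor=0.95):
    """Route each exception to an automated fix or to an accountant."""
    actions = []
    for err in interface_errors:
        actions.append(rerun_interface(err))              # re-run failed interface jobs
    for je in je_errors:
        fix = propose_correction(je)
        if fix["confidence"] >= confidence_floor:         # auto-post only high-confidence fixes
            actions.append({**fix, "route": "auto-post"})
        else:
            actions.append({**fix, "route": "accountant review"})  # human review path
    return actions

print(auto_close(
    interface_errors=[{"job": "AR_feed_0630"}],
    je_errors=[{"je_id": "JE-1042", "confidence": 0.98},
               {"je_id": "JE-1043", "confidence": 0.60}],
))
```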
What are the risks?
If training data does not reflect real-world inputs, as well as possible edge cases the AI tool may encounter, then model accuracy can be reduced, which may adversely affect the business process the AI solution is supporting. In addition, a lack of interpretability in the AI model could lead to misuse of the technology or an inability to meet business objectives.
How can it be controlled?
Perform extensive testing of the solution using training data that reflects real-world inputs. Document testing results and remediate identified issues. A sample of the AI system’s decisions, or output, should also be reviewed regularly for reasonableness, with adjustments made to the model as needed.
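As a simple illustration of the sampling step, the sketch below randomly selects a portion of the AI system’s decisions for periodic human review; the sample rate and minimum are assumptions chosen for illustration only:

```python
# Hedged sketch: periodically sampling the AI system's output for human review.
import random

def sample_for_review(decisions, rate=0.05, minimum=25, seed=None):
    """Pick a random sample of AI decisions for the periodic review control."""
    rng = random.Random(seed)
    n = max(minimum, int(len(decisions) * rate))
    n = min(n, len(decisions))          # never ask for more items than exist
    return rng.sample(decisions, n)

# Example: 1,000 auto-posted journal entries from the month-end close.
decisions = [f"JE-{i:04d}" for i in range(1000)]
review_queue = sample_for_review(decisions, seed=42)
print(len(review_queue), review_queue[:5])
```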
There is no doubt that GenAI is on an upward, and some may say unstoppable, trajectory across the marketplace, and that includes finance and accounting. Before jumping onto that high-speed train with eyes closed, organizations should make sure they have a strong risk and controls framework in place to better facilitate trustworthy AI.
To explore additional insights into the current state of AI and where it intersects with Deloitte’s vision for the future of controls and finance, listen to our Dbriefs webcast on demand: Controlling AI: Implementing AI into finance and controls