Regulating AI in the banking space: A call to action
AI offers clear benefits to the banking industry, but questions remain about how much regulators should intervene in this emerging practice.
July 10, 2019
A blog post by Jan-Thomas Schoeps, a research manager at the Deloitte Center for Financial Services, Deloitte Services LP
As artificial intelligence (AI) gains popularity in the banking sector, it is attracting attention from regulators.1 The application of AI in banking has many benefits, such as higher efficiency from automating banks’ internal processes, faster innovation, and an enhanced customer experience (for example, instant credit application decisions).2 However, it also has drawbacks, including the potential for lack of transparency3 and bias4 in algorithms’ decisions, and potential risks to financial stability.5
As I will argue, AI, due to its unique characteristics, does not conform to traditional regulatory frameworks and needs a new approach to regulation. Not regulating AI is not an option, but overregulation of this burgeoning field may have undesirable consequences. This raises the question: How should regulators of the banking sector strike the right balance between fostering innovation and keeping risks in check? The good news is that regulators are aware of this trade-off,6 but there is still a lack of clarity regarding regulation.
Arguments for AI regulation
Let’s first look at lack of transparency in AI models. Many of these models tend to be “black boxes,” with limited visibility into the rationale for the outputs.7 This is particularly true of machine-learning algorithms. Given this reality, a regulatory framework relying on transparency, as is traditionally the case, may be at odds with the nature of AI.
Next, let’s examine bias in AI decisions. The quality of an AI algorithm’s decisions depends not only on the quality of the data, but also on how a problem is framed and the leeway a model is given to achieve its goals.8 For instance, an algorithm can be tasked with credit underwriting for profit maximization, ignoring social implications. If the algorithm then discovers that discriminating against a certain group of market participants maximizes returns, its decisions may become unfair.
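To make this concrete, the scenario can be sketched in a few lines of code. This is a hypothetical illustration of my own, not a real underwriting model: the group names, default rates, and loan figures are all invented.

```python
# Hypothetical sketch: a lending rule whose only objective is profit
# maximization, with no fairness constraint. Group names, default rates,
# and loan figures are invented for illustration.

HISTORICAL_DEFAULT_RATE = {"A": 0.02, "B": 0.12}  # group B's data is sparse and skewed

def expected_profit(applicant):
    """Expected profit = interest earned if repaid, minus loss on default.
    The default estimate leans on group membership as a crude proxy,
    which is exactly where unfair treatment can creep in."""
    p_default = HISTORICAL_DEFAULT_RATE[applicant["group"]]
    interest = 0.10 * applicant["loan"]
    loss = applicant["loan"]
    return (1 - p_default) * interest - p_default * loss

def approve(applicant):
    """Approve whenever expected profit is positive -- the only goal."""
    return expected_profit(applicant) > 0

alice = {"group": "A", "loan": 10_000}
bob = {"group": "B", "loan": 10_000}  # financially identical to alice

print(approve(alice))  # True:  0.98 * 1_000 - 0.02 * 10_000 = 780 > 0
print(approve(bob))    # False: 0.88 * 1_000 - 0.12 * 10_000 = -320 < 0
```

A profit-only objective like this can deny a creditworthy applicant purely because of the group label attached to their data, which is precisely the kind of outcome regulators worry about.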
Third, AI may pose financial stability risks. If machine-learning algorithms collectively adjust to and follow a previously outperforming pattern in trading or lending decisions, herding behavior may occur.9 This has the potential to amplify market shocks or concentrate risks.
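The herding dynamic can be illustrated with a toy model (again, a hypothetical sketch of my own, with invented numbers): if every agent follows the same adaptive rule of copying last period’s winning strategy, positions quickly concentrate on one side of the market.

```python
# Hypothetical sketch: two strategies, "A" and "B", and 100 algorithmic
# agents split evenly between them. Each period, half of the agents on the
# losing strategy switch to last period's winner -- the kind of shared
# adaptive rule that can produce herding. All numbers are invented.

def step(counts, winner, switch_rate=0.5):
    """Move a fraction of agents from the losing strategy to the winner."""
    loser = "B" if winner == "A" else "A"
    movers = int(counts[loser] * switch_rate)
    counts[loser] -= movers
    counts[winner] += movers
    return counts

counts = {"A": 50, "B": 50}

# Suppose strategy "A" keeps outperforming for five periods in a row.
for _ in range(5):
    counts = step(counts, winner="A")

print(counts)  # -> {'A': 98, 'B': 2}: nearly all agents crowd into one trade
```

Once positions are this concentrated, a shock to the crowded strategy hits nearly every participant at once, which is how herding can amplify market moves.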
These potential scenarios suggest AI should be regulated but perhaps not in the traditional way.
Arguments against a strict regulation of AI
Strict regulation could run the risk of overregulation. Banks are already at a competitive disadvantage vs. nonregulated entities such as technology firms or fintechs, as these firms i) are adept at exploiting vast amounts of data, and ii) have a cost advantage because they don’t have to adhere to the same compliance expectations as banks.10 Introducing strict regulations on AI development may discourage banks from developing AI solutions and intensify this competitive disadvantage. Smaller banks, in particular, would be disadvantaged, as they might be unable to explain to regulators how the decision-making process of a third-party licensed AI application works.11
Overregulation is, however, not the only potential issue for banks. The lack of clearly specified AI regulation may also prove disadvantageous by delaying bank-developed AI applications. If there is uncertainty or a lack of clarity, banks might be cautious in proactively driving AI development forward, as a recent response by a major broker-dealer to a Financial Industry Regulatory Authority (FINRA) request suggests.12
AI regulation in the US banking sector
What have regulators done so far regarding AI? The US Treasury Department released a report embracing the development of “competitive technologies” in the financial services sector.13 Meanwhile, the Financial Stability Board has publicly acknowledged the benefits of AI solutions to consumers as well as banks.14 The Fed supports banks adopting AI because it is concerned that nonregulated entities may derive an advantage from using it.15
However, it remains unclear how AI fits into the existing regulatory landscape.16 The Fed has said that it will be relying on existing regulatory and supervisory "guardrails" to assess the appropriate approach for AI processes,17 but this may not be sufficient.
Accept the inherent uncertainty in regulating AI
How then to strike a balance between too much and too little AI regulation? Perhaps there is a need to change the approach and recognize that technologies like AI will continue to evolve, and that no matter how comprehensive or up-to-date new regulations are today, they may not fully account for future developments. Regulators may also need to accept that a degree of uncertainty inherent to AI might always exist and opt for more dynamic AI regulation that is flexible and adaptable to future changes. Work done by my colleagues at Deloitte’s Center for Government Insights18 offers perspective on how the future of regulation—not just banking regulation—might look.
In any case, regulators ought to move forward in a timely manner and establish clear guidelines and rules for the development and deployment of AI at banks. Inaction, slow movement, or an attempt to get regulation "right and tight" may produce undesirable consequences.
1 “What Are We Learning about Artificial Intelligence in Financial Services?” November 13, 2018.
2 Arthur Bachinskiy. “The Growing Impact of AI in Financial Services: Six Examples.” Towards Data Science, February 21, 2019.
3 “The Future of Regulation.” Deloitte. June 19, 2018.
4 “Can AI be ethical?” Deloitte. April 19, 2019.
5 “Artificial intelligence and machine learning in financial services.” Financial Stability Board. November 1, 2017.
7 Kylie Foy. “Artificial intelligence system uses transparent, human-like reasoning to solve problems.” MIT News. September 11, 2018.
8 Karen Hao. “This is how AI bias really happens—and why it’s so hard to fix.” MIT Technology Review. February 4, 2019.
10 Agustín Carstens. “Big tech in finance and new challenges for public policy.” Bank for International Settlements. December 4, 2018.
11 Kate Berry. “CFPB catches flak from banks, credit unions on risks of AI.” American Banker. December 6, 2018.
12 “Re: FINRA’s Request for Comment on Financial Technology Innovation in the Broker-Dealer Industry.” Credit Suisse. October 12, 2018.
13 “A Financial System That Creates Economic Opportunities.” US Department of the Treasury. July 2018.
16 Pamela L. Marcogliese, Colin D. Lloyd, and Sandra M. Rocks. “Machine Learning and Artificial Intelligence in Financial Services.” Harvard Law School Forum on Corporate Governance and Financial Regulation. September 24, 2018.
QuickLook is a weekly blog from the Deloitte Center for Financial Services about technology, innovation, growth, regulation, and other challenges facing the industry. The views expressed in this blog are those of the blogger and not official statements by Deloitte or any of its affiliates or member firms.