Generative AI is expected to magnify the risk of deepfakes and other fraud in banking

Fake content has never been easier to create—or harder to catch. As threats grow, banks can invest in AI and other technologies to help detect fraud and prevent losses.

Satish Lalchand

United States

Val Srinivas

United States

Brendan Maggiore

United States

Joshua Henderson

United States

In January 2024, an employee at a Hong Kong-based firm sent US$25 million to fraudsters after being instructed to do so by what appeared to be her chief financial officer on a video call that also included other colleagues. In reality, she wasn’t on a call with any of these people: Fraudsters had created deepfakes that replicated their likenesses to trick her into sending the money.1

Incidents like this will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers.2 Deloitte’s Center for Financial Services predicts that gen AI could enable fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023, a compound annual growth rate of 32% (figure 1).
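For readers who want to check the arithmetic, the growth rate implied by those two endpoints follows the standard compound-annual-growth-rate formula. The sketch below is illustrative only; plugging in the rounded headline figures yields a slightly different rate than the published 32%, which reflects the underlying model values.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Rounded headline figures: US$12.3 billion (2023) to US$40 billion (2027)
rate = cagr(12.3, 40.0, years=4)
print(f"Implied CAGR: {rate:.1%}")  # ~34% on rounded endpoints; the published rate is 32%
```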

How generative AI is making fraud a lot easier—and cheaper—to pull off

Generative AI offers seemingly endless potential to magnify both the nature and the scope of fraud against financial institutions and their customers; it’s limited only by a criminal’s imagination.

The astounding pace of innovation will challenge banks’ efforts to stay ahead of fraudsters, in part because generative AI-enabled deepfakes incorporate a “self-learning” system that constantly checks and updates its ability to fool computer-based detection systems.3
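This adversarial dynamic is easiest to see in the generative adversarial network (GAN) setup that underpins many deepfake tools: a generator is trained against a detector, so every improvement in detection feeds straight back into the forgery. The PyTorch sketch below illustrates the loop on one-dimensional toy data; the architecture and hyperparameters are invented for the example and bear no relation to any real deepfake system.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from N(4, 1). The generator learns to mimic it;
# the discriminator plays the role of a computer-based detection system.
def real_batch(n=64):
    return torch.randn(n, 1) + 4.0

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the detector to separate real from generated samples.
    real, noise = real_batch(), torch.randn(64, 8)
    fake = gen(noise).detach()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator against the *updated* detector -- the
    #    "constantly checks and updates" loop described above.
    fake = gen(torch.randn(64, 8))
    g_loss = bce(disc(fake), torch.ones(64, 1))  # rewarded for fooling the detector
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of generated samples:", gen(torch.randn(1000, 8)).mean().item())  # ~4.0
```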

Specifically, ready access to new generative AI tools makes deepfake videos, synthetic voices, and fictitious documents easy and cheap to produce. An entire cottage industry already exists on the dark web, selling scamming software for anywhere from US$20 to thousands of dollars.4 This democratization of nefarious software is making a number of current anti-fraud tools less effective.5

Little wonder, then, that financial services firms are particularly concerned about generative AI fraud that targets client accounts. One report found that deepfake incidents in the fintech sector increased 700% in 2023.6 For audio deepfakes alone, the technology industry lags in developing tools to reliably identify fake content.7

Some fraud types may be more vulnerable to generative AI than others. For example, business email compromise, one of the most common types of fraud, can cause substantial monetary loss, according to data from the FBI’s Internet Crime Complaint Center (IC3), which tracks 26 categories of fraud.8 Fraudsters have been compromising individual and business email accounts through social engineering to conduct unauthorized money transfers for years. With gen AI, however, bad actors can perpetrate this fraud at scale, targeting multiple victims at once with the same or fewer resources. In 2022 alone, the FBI counted 21,832 instances of business email fraud, with losses of approximately US$2.7 billion. The Deloitte Center for Financial Services estimates that generative AI email fraud losses could total about US$11.5 billion by 2027 in an “aggressive” adoption scenario.
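Some rough arithmetic on the FBI figures shows why automation at scale is the real threat. The multiplier below is purely hypothetical, a way of illustrating how losses compound when the same crew can run many more campaigns; it is not part of Deloitte’s forecast.

```python
incidents_2022 = 21_832            # FBI-reported business email compromise cases
losses_2022 = 2.7e9                # approximate reported losses, in US$

avg_loss = losses_2022 / incidents_2022
print(f"Average loss per incident: ${avg_loss:,.0f}")   # roughly $124,000

# Hypothetical: if gen AI lets the same actors run 3x the campaigns with
# the same resources, losses scale with volume even at a flat hit rate.
print(f"Illustrative 3x-volume losses: ${losses_2022 * 3 / 1e9:.1f}B")
```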

Banks have been at the forefront of using innovative technologies to fight fraud for decades. However, a US Treasury report found that “existing risk management frameworks may not be adequate to cover emerging AI technologies.”9 Where older fraud systems relied on hand-built business rules and decision trees, financial institutions today commonly deploy artificial intelligence and machine learning tools to detect, flag, and respond to threats. For instance, some banks use AI to automate the processes that diagnose fraud and route investigations to the appropriate team.10 Some are already incorporating large language models to detect signs of fraud, such as the model JPMorgan uses to spot email compromises.11 Similarly, Mastercard is working to prevent credit card fraud with its Decision Intelligence tool, which scans a trillion data points to predict whether a transaction is genuine.12
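To make the rules-versus-models contrast concrete, here is a minimal sketch of unsupervised anomaly scoring over transaction features using scikit-learn. It is a generic illustration, not a representation of JPMorgan’s or Mastercard’s systems; the features, distributions, and thresholds are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transactions: [amount_usd, hour_of_day, days_since_last_txn]
normal = np.column_stack([
    rng.lognormal(mean=4, sigma=1, size=5000),   # typical purchase amounts
    rng.integers(8, 22, size=5000),              # daytime activity
    rng.exponential(scale=3, size=5000),         # frequent account use
])
suspicious = np.array([[250_000.0, 3, 90]])      # large transfer, 3 a.m., dormant account

# A rules engine needs hand-maintained thresholds per feature; the model
# instead learns a joint profile of "normal" and flags deviations from it.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))        # -1 means anomaly: route to investigators
print(model.score_samples(suspicious))  # lower score means more anomalous
```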

How banks can prepare for a new era of fraud prevention

Banks should focus their efforts on fighting generative AI-enabled fraud, both to prevent losses and to maintain a competitive edge. They should consider coupling modern technology with human intuition to determine how these tools can preempt attacks. There won’t be one silver-bullet solution, so anti-fraud teams should continually accelerate their self-learning to keep pace with fraudsters. Future-proofing against fraud will also require banks to redesign their strategies, governance, and resources.

The pace of technological advancement means banks won’t fight fraud alone: they will increasingly work with third parties that are developing anti-fraud tools. Since a threat to one company is a potential threat to all, bank leaders can develop strategies to collaborate within and beyond the banking industry to stay ahead of generative AI fraud. Banks should work with knowledgeable and trustworthy third-party technology providers on these strategies, establishing each party’s areas of responsibility and addressing how liability for fraud will be shared.

Customers, too, can serve as partners in helping prevent fraud losses. But customer relationships may be tested when determining whether a fraud loss should be borne by the customer or the financial institution. Customers expect both efficiency and security when accessing their money, and generative AI’s deepfake capabilities could undermine both. Banks have an opportunity to educate consumers and build awareness of the risks and of how the bank is managing them. Building this level of awareness will likely require frequent communication touchpoints, such as push notifications in banking apps that warn customers of possible threats.

Regulators, alongside the banking industry, are focused on both the promise and the threats of generative AI. Banks should actively participate in developing new industry standards. By bringing compliance in early during technology development, banks can keep a record of their processes and systems ready should regulators request it.

And finally, banks should invest in hiring new talent and training current employees to spot, stop, and report AI-assisted fraud. For many banks, these investments will be expensive and difficult, coming at a time when some bank leaders are prioritizing cost management. But to stay ahead of fraudsters, extensive training should remain a priority. Banks can also develop new fraud-detection software with internal engineering teams, third-party vendors, and contract employees, an effort that can help foster a culture of continuous learning and adaptation.

Generative AI is expected to significantly raise the threat of fraud, which could cost banks and their customers as much as US$40 billion by 2027. Banks should step up their investments to create more agile fraud teams to help stop this growing threat.

About this prediction

Our prediction for generative AI adoption in fraud is based on historical trends and input from Deloitte professionals specializing in fraud and risk. We assigned a “generative AI fraud risk” score to each of the 26 types of fraud tracked by the FBI’s IC3 report, then assigned expected growth rates for each fraud type through 2027 under three generative AI adoption scenarios: conservative, base, and aggressive. The forecast’s assumptions are informed by our understanding of how the various fraud types differ.
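In spirit, the mechanics of such a projection can be sketched in a few lines: score each fraud category for its exposure to gen AI, apply a scenario growth rate modulated by that score, and compound it out to 2027. Every number below is a placeholder; the real model’s categories, scores, and rates are Deloitte’s and are not reproduced here.

```python
# Placeholder inputs -- illustrative structure only, not Deloitte's model.
base_losses_2023 = {                  # US$ billions by fraud category
    "business_email_compromise": 2.9,
    "investment_fraud": 4.6,
    "identity_theft": 0.7,
}
genai_risk_score = {                  # 0-1: exposure to gen AI amplification
    "business_email_compromise": 0.9,
    "investment_fraud": 0.8,
    "identity_theft": 0.6,
}
scenario_growth = {"conservative": 0.10, "base": 0.20, "aggressive": 0.35}

for scenario, g in scenario_growth.items():
    total_2027 = sum(
        loss * (1 + g * genai_risk_score[cat]) ** 4   # compound 2023 -> 2027
        for cat, loss in base_losses_2023.items()
    )
    print(f"{scenario:>12}: US${total_2027:.1f}B by 2027")
```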


Endnotes

1. Heather Chen and Kathleen Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” CNN, February 4, 2024.
2. Jon Bateman, Deepfakes and synthetic media in the financial system: Assessing threat scenarios, Carnegie Endowment for International Peace, July 8, 2020.
3. Alakananda Mitra, Saraju P. Mohanty, and Elias Kougianos, “The world of generative AI: Deepfakes and large language models,” arXiv, February 8, 2024.
4. Nabila Ahmed et al., “Deepfake imposter scams are driving a new wave of fraud,” Bloomberg, August 21, 2023.
5. Hannah Murphy, “Deepfakes make banks keep it real,” Financial Times, September 20, 2023.
6. Isabelle Bousquette, “Deepfakes are coming for the financial sector,” Wall Street Journal, April 3, 2024.
7. Huo Jingnan, “Using AI to detect AI-generated deepfakes can work for audio — but not always,” NPR, April 5, 2024.
8. Federal Bureau of Investigation, Internet crime report 2023, April 4, 2024.
9. US Department of the Treasury, Managing artificial intelligence-specific cybersecurity risks in the financial services sector, March 2024.
10. Edmund Lawler, “Banks face the twin-edged sword of generative AI,” BAI, March 4, 2024.
11. Penny Crosman, “JPMorgan Chase using advanced AI to detect fraud,” American Banker, July 3, 2023.
12. Mastercard, “Mastercard supercharges consumer protection with gen AI,” press release, February 1, 2024.

Acknowledgments

The authors would like to thank Andrew Myers, advisory manager at Deloitte & Touche LLP, for his contributions to this article.

Cover art by: Natalie Pfaff