Posted: 13 Sep. 2024 · 5 min. read

The rise of AI fraud and how you can reduce your risk

By Ryan Hittner, Audit & Assurance Principal, Deloitte & Touche LLP, and Kirk Petrie, Audit & Assurance Managing Director, Deloitte & Touche LLP

Talking points
  • Generative AI technology gives bad actors access to sophisticated tools to commit fraud.
  • To help mitigate the risk of AI fraud, organizations should revisit their fraud risk management frameworks.
  • By following a few strategic tips, such as those outlined below, functions like risk management and internal audit can play an important role in reducing AI-enabled fraud risk.

Over the past year, we’ve seen a flood of stories about the impressive benefits artificial intelligence (AI) can bring to businesses. But with this rush of AI exuberance, are we overlooking the potential for new types of AI-enabled fraud and criminal activity? 

Thanks to Generative AI (GenAI), bad actors now have access to sophisticated tools that can execute more complex fraud schemes on a large scale, potentially evading traditional detection methods. In a new report, Deloitte’s Center for Financial Services predicts that GenAI could drive a substantial increase in fraud losses in the United States: from some $12 billion in 2023 to $40 billion by 2027.

GenAI and the potential for fraud

As GenAI-enabled fraud schemes evolve, here are a few examples of the fraud we’re already seeing:

  • Synthetic identity and deepfake fraud: Many of us have seen examples of deepfakes in the media. These videos, audio recordings, and other synthetic media can impersonate individuals and mimic their speech patterns in a highly realistic way. A closely related and equally malicious type of fraud is synthetic identity creation, in which criminals use AI and other tools to forge identity documents such as driver’s licenses and employee ID cards, drawing on a mix of real and fabricated personally identifiable information (PII). In both cases, bad actors exploit free or low-cost, readily available AI tools to evade human detection and elude less advanced authentication systems.
  • Forged documents and financial statement manipulation: Bad actors can create highly convincing forged documents or reports (e.g., invoices, bank statements, shipping documents) that may bear replicated watermarks, letterheads, and even signatures. This increased sophistication makes it more challenging for traditional verification processes to detect fraudulent documentation. Moreover, bad actors can use GenAI to create synthetic transactions. Innocuous and legitimate-looking, these transactions can find their way into operational systems such as point-of-sale terminals, inventory management systems, or enterprise resource planning (ERP) platforms.

Impact and steps you can take to protect your organization

Beyond financial losses, AI-enabled fraud can put an organization’s trust, credibility, and brand at risk. A company that fails to protect itself and its stakeholders can lose the confidence of customers, investors, and employees.

So how can companies protect themselves? Deloitte has identified some specific steps your organization can take to bolster your fraud risk management framework and defend against AI-enabled fraud.

  • Risk assessment: Identify potential AI fraud vectors relevant to your organization, assess their likelihood and impact, and evaluate the effectiveness of existing controls. Consider GenAI’s emergent capabilities, including advanced reasoning and pattern recognition, as you develop and test your response plans for various AI fraud scenarios. 
  • Access and approval systems: Establish multiple levels of approval and implement multifactor authentication (MFA) to verify the identities of personnel authorized to handle cash disbursements and other transactions that require approval. Schemes targeting cash disbursements are often cloaked in a sense of urgency and a false air of authority. Multiple levels of approval slow the process and provide greater opportunity to identify suspicious attributes or markers in a transaction. All told, MFA can foil fraud attempts by making it more difficult for those using GenAI to impersonate employees authorized to approve transactions.
  • Verification of documents: Implement rigorous verification processes for documents originating from third parties to combat the risk of AI-generated fraudulent documentation. For example, organizations can establish direct communication channels with issuers of critical documents rather than merely accepting provided documents at face value. Moreover, organizations can draw on independent sources and databases to cross-verify information. A multifaceted approach to verification can reduce the risk of your company falling victim to sophisticated, fraudulent AI-generated documentation.
  • Collaboration and information sharing: Establish a multidisciplinary team (e.g., internal audit, risk management, IT, cybersecurity, and professionals from other functions) to monitor relevant advances in AI technology and regularly update risk assessments, security protocols, and fraud detection systems geared to emerging AI capabilities.
  • Training and communication: Improve training to increase employee awareness of new and evolving types of fraud, while reinforcing the appropriate courses of action when a breach is suspected.

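To make the access-and-approval guidance above concrete, here is a minimal sketch of a tiered-approval check for cash disbursements. The tier thresholds, names, and data shapes are hypothetical illustrations, not a prescribed design; a real control would integrate with an identity provider for MFA and with the ERP system of record.

```python
from dataclasses import dataclass, field

# Hypothetical approval tiers: larger disbursements require more approvers.
APPROVAL_TIERS = [
    (10_000, 1),         # up to $10,000: one approver
    (100_000, 2),        # up to $100,000: two approvers
    (float("inf"), 3),   # above that: three approvers
]

@dataclass
class Disbursement:
    amount: float
    # Each approval is a (user, mfa_verified) pair recorded by the workflow.
    approvals: list = field(default_factory=list)

def required_approvals(amount: float) -> int:
    """Return how many distinct approvers a disbursement of this size needs."""
    for limit, approvers in APPROVAL_TIERS:
        if amount <= limit:
            return approvers
    return APPROVAL_TIERS[-1][1]

def may_release(d: Disbursement) -> bool:
    """Release funds only if enough *distinct*, MFA-verified approvers signed off."""
    verified = {user for user, mfa_ok in d.approvals if mfa_ok}
    return len(verified) >= required_approvals(d.amount)

d = Disbursement(amount=50_000)
d.approvals.append(("alice", True))   # MFA-verified approval
d.approvals.append(("alice", True))   # a duplicate approver does not count twice
print(may_release(d))                 # one distinct approver, two required → False
d.approvals.append(("bob", True))
print(may_release(d))                 # → True
```

Counting only distinct, MFA-verified approvers is the point of the sketch: an impersonation attempt that compromises a single identity still cannot release a high-value transaction on its own.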
What role can Deloitte play?

Deloitte can advise you on identifying and responding to AI-enabled fraud. We have extensive experience with fraud risk management, the three lines of defense model, and other fraud prevention measures. To learn more, reach out to Ryan Hittner or Kirk Petrie.


This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

The services described herein are illustrative in nature and are intended to demonstrate our experience and capabilities in these areas; however, due to independence restrictions that may apply to audit clients (including affiliates) of Deloitte & Touche LLP, we may be unable to provide certain services based on individual facts and circumstances.


Get in touch

Ryan Hittner

Audit & Assurance Principal

Ryan is an Audit & Assurance principal with more than 15 years of management consulting experience, specializing in strategic advisory to global financial institutions with a focus on banking and capital markets. Ryan co-leads Deloitte's Artificial Intelligence & Algorithmic practice, which is dedicated to advising clients on developing and deploying responsible AI, including risk frameworks, governance, and controls related to artificial intelligence (AI) and advanced algorithms. He also serves as deputy leader of Deloitte's Valuation & Analytics practice, a global network of seasoned industry professionals with experience encompassing a wide range of traded financial instruments, data analytics, and modeling. In that role, Ryan leads Deloitte's Omnia DNAV Derivatives technologies, which incorporate automation, machine learning, and large datasets.

Ryan previously served as a leader in Deloitte's Model Risk Management (MRM) practice and has extensive experience providing a wide range of model risk management services to financial services institutions, including model development, model validation, technology, and quantitative risk management. He specializes in quantitative advisory across asset classes and risk domains such as AI and algorithmic risk, model risk management, liquidity risk, interest rate risk, market risk, and credit risk. He serves his clients as a trusted service provider to the CEO, CFO, and CRO in solving problems related to risk management and financial risk management. Additionally, Ryan has worked with several of the top 10 US financial institutions, leading quantitative teams that address complex risk management programs, typically involving process reengineering. Ryan also leads Deloitte's initiatives focusing on ModelOps and cloud-based solutions, driving automation and efficiency within the model/algorithm lifecycle.

Ryan received a BA in Computer Science and a BA in Mathematics & Economics from Lafayette College.
Media highlights and perspectives

  • First Bias Audit Law Starts to Set Stage for Trustworthy AI, August 11, 2023 – In this article, Ryan was interviewed by the Wall Street Journal's Risk and Compliance Journal about New York City Local Law 144-21, which went into effect on July 5, 2023.
  • Perspective on New York City Local Law 144-21 and preparation for bias audits, June 2023 – In this article, Ryan and other contributors share the new rules coming for the use of AI and other algorithms in hiring and other employment decisions in New York City.
  • Road to Next, June 13, 2023 – In the June edition, Ryan sat down with PitchBook to discuss the current state of AI in business and the factors shaping the next wave of workforce innovation.