Posted: 17 April 2023 · 5 min read

Generative AI - Risks and controls

This follows Deloitte’s previous blog on generative AI risk and ethical considerations.

Over the past few months there has been extensive discussion across industry about the capabilities of generative AI technology and its potential to revolutionise the way we work. Whilst it is clear that these technologies will deliver efficiency gains, there are also risks to consider.

In this blog we touch on some of the risks and controls for firms to consider as they embark on their journey to generative AI adoption.

Fraud

Foundation models such as the GPT family and BERT will undoubtedly accelerate the pace at which firms can research, produce and document content. The capabilities of these and other models could, however, be exploited by those seeking to commit fraud, for example by falsifying invoices or transactions, or even by supporting the creation of fake identities. Firms relying on inputs from third parties to support business decisions (e.g. in the insurance or lending sectors) should consider whether they have sufficient controls in place to verify that the evidence provided is genuine.

Reputational

Many generative AI systems do not yet incorporate ethics into their decision making, and their outputs depend heavily on the data on which they were trained. This can cause an AI system to behave in unexpected ways, including producing outputs that are not aligned with an organisation’s own ethical principles. Firms should consider the extent to which they have transparency over the training data used and the level of testing conducted to identify potential issues such as bias or discrimination.
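
To illustrate what such testing might involve, below is a minimal Python sketch of one possible bias check on logged decision outcomes. The sample data, function names and the four-fifths (0.8) threshold are illustrative assumptions, not a prescribed standard.

    # Minimal sketch of a bias check on logged decision outcomes, assuming
    # each record pairs a demographic group with a binary outcome
    # (1 = favourable). All names, data and thresholds are illustrative.
    from collections import defaultdict

    def selection_rates(records):
        # Rate of favourable outcomes per group.
        totals, favourable = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            favourable[group] += outcome
        return {g: favourable[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        # Lowest selection rate divided by the highest; values below the
        # commonly cited 0.8 ("four-fifths") level may warrant investigation.
        lo, hi = min(rates.values()), max(rates.values())
        return lo / hi if hi else 1.0

    decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                 ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates = selection_rates(decisions)
    print(rates, disparate_impact_ratio(rates))  # ratio 0.5 -> flag for review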

Financial 

Algorithms and machine learning models have already been adopted by financial institutions to support trading, investment and credit decisions. As firms consider adopting generative AI technology, there is an increased risk that unidentified flaws or inadequate data could result in financial losses. To mitigate this risk, it is important to test AI systems extensively prior to deployment and to consider the need for human oversight in higher-risk areas. Firms should also consider establishing monitoring controls and alerts to identify whether the AI system is performing in a manner that was not originally intended; one simple form of such a control is sketched below.
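
As an illustration of this kind of monitoring control, the following Python sketch compares a rolling average of logged output scores against a pre-deployment baseline and raises an alert when they diverge. The class, baseline, tolerance and window size are hypothetical placeholders that a firm would calibrate to its own metrics and risk appetite.

    # Illustrative post-deployment monitoring control: compare a rolling
    # average of logged output scores against a pre-deployment baseline and
    # alert when the relative deviation exceeds a tolerance. The baseline,
    # tolerance and window values are placeholders, not recommended settings.
    import random
    from collections import deque
    from statistics import mean

    class DriftMonitor:
        def __init__(self, baseline_mean, tolerance=0.15, window=100):
            self.baseline = baseline_mean
            self.tolerance = tolerance          # allowed relative deviation
            self.recent = deque(maxlen=window)  # rolling window of scores

        def record(self, score):
            # Log a new score; return True when an alert should be raised.
            self.recent.append(score)
            if len(self.recent) < self.recent.maxlen:
                return False                    # not enough data yet
            drift = abs(mean(self.recent) - self.baseline) / self.baseline
            return drift > self.tolerance

    monitor = DriftMonitor(baseline_mean=0.42)
    for _ in range(200):
        score = random.gauss(0.55, 0.05)        # synthetic drifting scores
        if monitor.record(score):
            print("Alert: outputs have drifted from the approved baseline")
            break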

Regulatory 

Recently there has been a surge in activity by governments and regulatory bodies focussed on ensuring firms have adequate AI governance structures and controls in place, with some countries now banning the use of certain generative AI technologies outright. Failure to comply with these requirements could expose firms to significant regulatory fines (e.g. up to 6% of global annual turnover under the proposed EU AI Act). Firms need to consider whether their development or use of generative AI technology will fall within the scope of these new requirements and put appropriate plans in place to ensure they are regulatory ready. At the core of many of these regulations is the need for appropriate governance and controls over AI development and usage.

Privacy & Technology

Major questions persist around the presence of personal data within the datasets used to train generative AI systems. Opacity around what data was collected, for what purpose, and how it is used increases the risk for firms creating generative AI systems or utilising their outputs. Firms will need to consider how to navigate these evolving challenges as they implement data privacy controls, including, for example, new policies around data retention and around access rights over data processed through the generative AI system. The resilience of the underlying technology and cloud infrastructure will also need further consideration, particularly for firms with large numbers of employees and/or clients adopting generative AI.

Generative AI can also create new vectors for cybersecurity risk and adversarial use that are difficult to predict. For instance, generative AI poses risks to biometric security systems, such as facial recognition.

Legal 

There has been much debate recently on the legal ownership of content produced by generative AI. Laws and legal interpretations may differ across jurisdictions, and firms should consider the extent to which they will own the intellectual property rights over any content produced. Early engagement with legal experts will reduce the risk of disputes over ownership in future years. Firms should also seek to identify any other current or future litigation risks relating to the use of this AI technology, including, for example, those arising under the proposed EU AI Liability Directive.

Next steps

Generative AI systems are clearly complex in nature, and navigating the potential risks can be challenging. We set out below some key takeaways for firms to consider as generative AI usage becomes more widespread:

  • Identify where generative AI could be used internally or by third parties such as clients, suppliers or other key stakeholders.
  • Determine whether this technology presents any new or incremental risks or regulatory obligations to your firm.
  • Define your firm’s risk appetite for generative AI adoption and develop related policies and procedures.
  • Assess the completeness and adequacy of the design of existing controls, including the need for additional policies and procedures.
  • Remediate any control gaps identified to ensure the risks associated with generative AI are adequately mitigated.

Please read our recent report on the “implications of generative AI on business” for more insights on this topic.

Should you wish to discuss this topic further, or require support with assessing the risks posed by AI and the necessary enhancements to your control framework, please don’t hesitate to get in touch with our AI Assurance team.

Key contacts

Mark Cankett

Partner

Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice as it relates to helping firms gain confidence over their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience supporting firms in enhancing their governance and control environments, as well as investigating and validating such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.

Barry Liddy

Director

Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised subject matter expert (SME) in AI regulation and has a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with these complex foundation models. Barry also leads our Digital Services Act (DSA) and Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these internet regulations. As part of this role, Barry oversees our firm’s assessment of controls in crucial areas such as consumer profiling techniques, recommender systems and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with relevant regulations in this space.

Roger Smith

Associate Director

Roger is an Associate Director in Deloitte’s Banking & Capital Markets Group, with a particular focus on algorithmic trading and regulatory change. He has a deep understanding of algorithms, gained through over 20 years of experience at major financial institutions in the UK and in Europe, and has developed a thorough technical knowledge of the regulatory landscape that governs algorithms and machine learning systems. As a senior member of the algorithm assurance team, Roger was instrumental in the design and development of Deloitte’s methodology for assessing algorithm risk management frameworks and performing algorithm validations.