
The role of enterprise risk management in generative AI


The arrival of generative AI

Generative AI, the technology that produces new content, has gained immense popularity in recent months. Whilst the hype around AI has been gathering pace for several years, it is the introduction of generative AI tools that has accelerated the conversation about AI and its potential impact on society and business.

Businesses are excited about the potential opportunities that AI offers. As AI systems develop more human-like intelligence, and capabilities far beyond human cognition in certain tasks, conversations about the future of work are being sparked. AI will impact every industry from healthcare to banking, and it is estimated that by 2025, 10% of all data produced will be generated by AI (up from less than 1% in 2021)1.

There are some specific risks associated with generative AI, notably the unwarranted level of confidence often displayed by Large Language Models (LLMs), and the limited visibility of the underlying data sources that contribute to each response. Because LLMs formulate responses by predicting statistically likely sequences of words rather than by reasoning about meaning, many refer to them as ‘stochastic parrots’.

There are also concerns about data security, with some companies banning generative AI following sensitive data leaks2, and an expected spike in legal cases relating to the alleged training of generative AI tools on data used without permission3.


Business adoption of generative AI

The majority of generative AI use today is by individuals as personal productivity tools. Individuals are using these tools to access the collective power of the internet in a faster, more immersive manner.

However, given the incredible opportunity, businesses are racing towards enterprise adoption. Indeed, the Deloitte State of AI 5th Edition report highlighted that 94% of business leaders see AI as important to their organisation’s success, whilst only 27% of respondents thought their organisation had ‘identified and largely adopted leading practices associated with strong AI outcomes’.

The corporate use cases are effectively limitless: popular Enterprise Resource Planning (ERP) systems, alongside many other common software providers, have already released generative AI plug-ins. As organisations inevitably start to embed this technology into their operations, the risks need to be carefully considered.

The threats associated with generative AI are amplified when used by organisations. If an LLM creates inaccurate content for an individual that is one thing, but if an organisation has embedded an LLM into a business process which supports decision making, then the real-world societal implications are potentially much greater.

As organisations embed these tools into their processes and the tools start creating content, interesting questions arise over accountability and compliance. Where AI makes accounting judgements, creates marketing images, or assesses the suitability of a customer for a particular product, these processes and outcomes will require due consideration of accountability, and steps should be taken to mitigate negative outcomes.


The role of enterprise risk management

The regulatory environment for AI is evolving globally, with the EU AI Act expected to come into force from 2024 and the UK Government’s White Paper on its approach to AI regulation recently released. These regulations will undoubtedly shape the approach organisations take to managing AI risk. However, given the pace at which this technology is evolving, and the opportunity for organisations to do things faster and more efficiently with AI, the regulatory environment is likely to struggle to keep pace with technological advancement and the speed of industry adoption.

As such, the risk management community and those charged with governance will play a key role in helping their organisations develop their approach to AI governance, so they can implement this technology in a safe and controlled manner.

So, what can risk management teams and those charged with governance do to help their organisation become AI ready?

The easiest first step is to start the conversation. Here are some simple questions to ask of your organisation:

  • Have we got an AI adoption strategy?
  • Do we know what AI we are using? Are we confident we have a complete picture?
  • Do we assess the risks associated with an AI system before implementing it?
  • Have we given our staff guidance on using generative AI tools safely in line with our data and AI ethics principles?
  • Have we determined our risk appetite with regard to generative AI adoption?

From this you will begin to build a picture of your organisation’s maturity level, and can then develop a plan for managing AI risk. See Figure 1 for some considerations.

Figure 1: AI specific Enterprise Risk Management considerations



An example risk management framework has been created by the National Institute of Standards and Technology (NIST), which could be used to inform your AI risk management strategy. Risk management can also help the organisation invest appropriately in AI by challenging business cases and assuring project management approaches to AI development and spend.

The rapid emergence of this technology means there are huge upside opportunities for organisations that adopt it successfully. However, to utilise this technology to its full potential, organisations need to fully understand the downside risks and implement a risk-intelligent approach to ensure generative AI is adopted safely and can be used with confidence.

This article is the second in a three-part series on what each line of defence should do to manage the risks associated with AI use.

Part one was aimed at the third line of defence and looked at what the Internal Audit community can do to ensure their organisation is ready both to benefit from, and manage the risks associated with, AI.

Part two above is aimed at the second line of defence, and also reflects how the arrival of generative AI has changed the landscape for risk professionals.

_________________________________________________________________________________________________________________

References


1 Gartner, Gartner Identifies the Top Strategic Technology Trends for 2022, 18th October 2021

2 Bloomberg, Samsung Bans ChatGPT, Google Bard, Other Generative AI Use by Staff After Leak, 2nd May 2023

3 Matthew Butterick, GitHub Copilot litigation, accessed 17th May 2023

