Posted: 12 July 2022 · 3 min read

Risk management in the new era of AI regulation

Considerations around risk management frameworks in line with the proposed EU AI Act

Artificial Intelligence (AI) has gradually positioned itself as an irreplaceable part of business. Many firms rely on AI systems for a growing range of activities, from credit scoring to product pricing. The rise of AI has triggered a wave of regulatory initiatives, most notably the proposed EU AI Act, which aims to define guidelines for the trustworthy use of AI and outlines a set of requirements for high-risk AI systems.

In this blog, we will explore in detail Article 9 of the proposed EU AI Act and look at the key considerations for firms around building effective AI risk management frameworks.

Key risk management considerations

The requirements for high-risk AI systems are the building blocks of the EU AI Act. The first point for high-risk AI system providers1 to consider, as outlined in Article 9, is the need to establish, implement, document, and maintain effective risk management systems in relation to their AI systems.

Identification of all AI risks

High-risk AI system providers will be required to conduct periodic risk assessments to identify all known and foreseeable risks associated with their AI systems. These assessments should be clearly documented and available for audit when requested by national competent authorities.

Article 9 asks firms to estimate and evaluate the risks that may emerge when the AI system is used in accordance with its intended purpose, as well as under conditions of reasonably foreseeable misuse. The ‘foreseeable misuse’ wording is key, as it adds an extra layer of complexity to AI risk management. The regulatory text defines ‘foreseeable misuse’ as ‘the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems’. An example of misuse would be the illegal collection of personal data, which in turn could be used to influence people’s social behaviours.
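To make the documentation requirement concrete, below is a minimal sketch of how a risk register covering both intended-use risks and foreseeable-misuse scenarios might be captured and exported for audit. The field names, categories, and severity scale are illustrative assumptions on our part; the Act itself does not prescribe a taxonomy or format.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative risk categories; these labels are our assumption,
# not terminology prescribed by the EU AI Act.
INTENDED_USE = "intended_use"
FORESEEABLE_MISUSE = "foreseeable_misuse"

@dataclass
class RiskEntry:
    """One documented risk in the register, kept audit-ready."""
    risk_id: str
    description: str
    category: str       # INTENDED_USE or FORESEEABLE_MISUSE
    severity: str       # e.g. "low" / "medium" / "high" (hypothetical scale)
    mitigation: str
    identified_on: date
    owner: str

@dataclass
class RiskRegister:
    system_name: str
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def export_for_audit(self) -> str:
        """Serialise the register so it can be produced on request."""
        return json.dumps(
            [dict(asdict(e), identified_on=e.identified_on.isoformat())
             for e in self.entries],
            indent=2,
        )

register = RiskRegister("credit-scoring-model")
register.add(RiskEntry(
    risk_id="R-001",
    description="Personal data collected unlawfully and reused for profiling",
    category=FORESEEABLE_MISUSE,
    severity="high",
    mitigation="Data-provenance checks before ingestion",
    identified_on=date(2022, 7, 1),
    owner="model-risk-team",
))
print(register.export_for_audit())
```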

Implementation of adequate control measures

The need to conduct comprehensive risk assessments brings with it the requirement to implement appropriate controls to mitigate the identified risks. Article 9 requires firms to give significant consideration to their existing control frameworks and to re-examine them in light of the new regulation.

Control measures should be designed to address the trustworthy AI guidelines outlined in the other articles of the proposal. Consideration should be given to the principles of transparency, data management, robustness, accuracy, cybersecurity, human oversight, and record keeping. In line with Article 9, controls and risk management systems should be periodically reviewed and tested against preliminarily defined metrics and probabilistic thresholds (e.g. potential instances of misuse). For risks that cannot be eliminated, firms should develop adequate mitigation plans.
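As a simple illustration of testing controls against pre-defined metrics and probabilistic thresholds, the sketch below flags any metric that breaches its threshold during a periodic review. The metric names and threshold values are hypothetical, chosen only to show the shape of such a test.

```python
# Hypothetical thresholds: metric name -> maximum acceptable observed rate.
CONTROL_THRESHOLDS = {
    "misuse_rate": 0.01,        # share of sessions flagged as potential misuse
    "prediction_error": 0.05,   # share of predictions outside tolerance
}

def test_controls(observed_metrics: dict[str, float]) -> list[str]:
    """Return descriptions of the metrics that breach their thresholds."""
    breaches = []
    for metric, threshold in CONTROL_THRESHOLDS.items():
        value = observed_metrics.get(metric)
        if value is not None and value > threshold:
            breaches.append(f"{metric}: {value:.3f} > {threshold:.3f}")
    return breaches

# A periodic (e.g. quarterly) review run might look like this:
breaches = test_controls({"misuse_rate": 0.02, "prediction_error": 0.01})
for b in breaches:
    print("Breach, escalate to risk owner:", b)
```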

Post-market risk monitoring

Monitoring risk does not end once the high-risk AI system has been developed. Understanding and mitigating risks after deployment is another important principle set out in Article 9, and firms will be required to treat post-market risk monitoring as a key component of their risk management systems.

This will require AI providers to establish sophisticated communication channels across their supply chains and to track their AI systems after deployment. Information from these channels should feed into the relevant teams with oversight responsibilities. Comprehensive remediation plans should be developed to mitigate any risk or failure arising from the functioning of the AI system once it is placed into production and distributed to users.
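One way such a feedback channel might look in practice is sketched below: deployment-side incident reports are queued and routed to a team with oversight responsibility. The severity levels, team names, and routing rule are hypothetical illustrations, not a prescribed design.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class IncidentReport:
    system_id: str
    source: str    # e.g. "distributor", "end-user support desk"
    severity: str  # hypothetical scale: "low" / "medium" / "high"
    details: str

# Incoming post-market reports from across the supply chain.
incident_queue: Queue[IncidentReport] = Queue()

def route_incident(report: IncidentReport) -> str:
    """Decide which team with oversight responsibility is notified."""
    if report.severity == "high":
        return "ai-oversight-board"    # triggers the remediation plan
    return "model-monitoring-team"     # logged and reviewed periodically

incident_queue.put(IncidentReport(
    system_id="credit-scoring-model",
    source="distributor",
    severity="high",
    details="Unexpected score drift for a customer segment",
))

while not incident_queue.empty():
    report = incident_queue.get()
    print(f"Notify {route_incident(report)}: {report.details}")
```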

User-centric requirements

The EU AI Act is heavily user-centric, and this inevitably results in additional requirements for firms to consider. One of these is end-user profiling: Article 9, for example, requires firms to consider whether children will access the AI system and, if so, to put appropriate restrictions in place. To achieve this, end-user profiling should be incorporated into the risk assessments conducted before deployment. One-size-fits-all controls will not be sufficient; firms will have to implement tailor-made control measures for different user groups.
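A tailor-made control for a given user group could be as simple as a group-specific gating rule, as in the sketch below. The user groups and restrictions shown are hypothetical examples of such measures, not requirements prescribed by the Act.

```python
# Hypothetical per-group restrictions; the groups and rules are
# illustrative assumptions, not prescribed by the EU AI Act.
RESTRICTIONS_BY_GROUP = {
    "child": {"allowed": False, "reason": "age-gated pending safeguards"},
    "vulnerable_adult": {"allowed": True, "reason": "simplified explanations shown"},
    "general": {"allowed": True, "reason": "standard controls apply"},
}

def access_decision(user_group: str) -> tuple[bool, str]:
    """Apply the control measures defined for the user's group."""
    rule = RESTRICTIONS_BY_GROUP.get(user_group, RESTRICTIONS_BY_GROUP["general"])
    return rule["allowed"], rule["reason"]

for group in ("child", "general"):
    allowed, reason = access_decision(group)
    print(f"{group}: allowed={allowed} ({reason})")
```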

Continuing the focus on users, Article 9 includes the need for technical knowledge, experience, education, and training. Contrary to popular perception, however, these are expected not only of firms’ staff but also of the end-users of the high-risk AI system. Therefore, as part of their risk management frameworks, firms should give due consideration to the technical knowledge and experience of the end-users of their AI systems. AI providers are expected to provide adequate information and, where appropriate, training to the end-users of their systems.


The road ahead for AI providers

The EU AI Act outlines the direction European regulators are taking to ensure that there are adequate rules around the development and use of high-risk AI systems. High-risk AI providers will need to perform Conformity Assessments – the primary deliverable to evidence that their AI systems are compliant with the EU AI Act. Article 9 is one of the core requirements of the Conformity Assessments and sets out the expectations for effective risk management systems. 

The Act brings new principles and requirements for firms to consider. Understanding the social and ethical aspects of AI systems will be an essential part of establishing robust risk management systems. Firms should re-examine their existing risk management measures and identify any gaps against the new regulatory requirements. It will be essential that the Board and Senior Management play a leading role in setting the culture, from top to bottom, with robust governance structures, adequate oversight, and appropriate control measures in place.

The changes which the proposed EU AI Act brings are substantial, but early action will enable firms to future-proof their systems for upcoming AI-related regulation. While the EU AI Act is drawing increasing attention, in our view more jurisdictions will start implementing legislative frameworks around AI, focusing on the principles of trustworthiness and ethics. To understand more about the proposed EU AI Act, the global AI regulatory environment, and our algorithm/AI assurance services, please get in touch.

___________________________________________________________________________________

References

1Definition of an AI provider: a natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market. AI distributors, importers, users or any other third parties are considered to be AI providers if they place a high-risk AI system on the market under their own trademark or name, in which case the requirements in Article 9 become applicable to them, too.

Resources:

Proposal and Annex to the Proposal for a Regulation of the European Parliament on Artificial Intelligence: EUR-Lex 52021PC0206 (europa.eu)

Understanding the proposed EU AI Act | Deloitte UK

Key contacts

Mark Cankett

Partner

Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice as it relates to assisting firms in gaining confidence over their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience supporting firms in enhancing their governance and control environments, as well as investigating and validating such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.

Barry Liddy

Director

Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised Subject Matter Expert (SME) in AI regulation and has a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to Generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with these complex foundation models. Barry also leads our Digital Services Act (DSA) & Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these internet regulations. As part of this role, Barry oversees our firm’s assessment of controls encompassing crucial areas such as consumer profiling techniques, recommender systems, and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with relevant regulations in this space.

Dzhuneyt Yusein

Assistant Manager

Dzhuneyt is an Assistant Manager within the Markets Assurance team of the Banking & Capital Markets Audit & Assurance Group. Dzhuneyt’s experience covers major regulatory transformation projects, such as the IBOR transition and MiFID II transaction reporting, for a variety of financial services clients across regions, including Tier 1 banks. His core skills and previous experience give him a detailed understanding of internal control processes and frameworks in banks and other financial institutions.