
EU AI Act adopted by the Parliament: What's the impact for financial institutions?

Background and current state of play


Considering the rapid developments in the artificial intelligence (AI) space, it seems like an age since the European Commission published its proposal for the new Artificial Intelligence Act (AI Act) back in April 2021.

Accelerated technology developments (notably the release of large language models such as OpenAI’s ChatGPT and Google’s Bard) triggered intense debate and added complexity to the already demanding EU legislative process. There is a sense of urgency to address these emerging trends, and to do so properly and comprehensively: not only for the EU’s own sake, but also to position Europe as a global leader whose approach other jurisdictions will follow when considering their own ways of regulating AI.

During the course of the legislative process, both the Council of the EU (in its general approach of 25 November 2022) and the Parliament (in its position of 11 May 2023) adopted their own positions. These positions have drifted away from the Commission’s original proposal, and the three legislative bodies will therefore need to align on integral aspects of the AI Act.

One can expect the “trilogue” meetings between the three bodies that will now commence to involve heated discussions before a final version of the text is agreed. That said, for practical reasons if nothing else, the final text is still expected to be adopted before the next European elections scheduled for June 2024.

What the AI Act is about and the approach the EU legislator took to regulate AI


Notwithstanding several contentious points between the legislative bodies, the AI Act already provides a sound basis for understanding the direction and approach the EU legislator is taking to regulate AI in Europe.

The overall objective of the AI Act is simple: to increase European consumers’ acceptance of, and trust in, AI. The path to get there is more challenging. This is where the AI Act comes in: it aims to achieve this objective by setting out harmonized rules for the development, placing on the market and use of AI systems in the EU.

When regulating AI, the European legislator opted for a “horizontal approach”: one technology-focused regulation that covers AI’s many impacts and use-cases. The AI Act is therefore not tailored to specific AI models or economic sectors such as the financial sector. This could be left for a later stage, when the legislator may create bespoke regimes for specific cases by means of secondary rulemaking (i.e., implementing acts).

Importantly, the AI Act follows a risk-based approach: for riskier AI systems, stricter rules will apply.

Following this logic, AI systems will be clustered into roughly three categories and subject to different requirements (a simple illustrative sketch follows the list):

  1. An outright ban for certain AI systems that pose an unacceptable risk1,
  2. Stringent requirements for AI systems classified as high risk, and
  3. A more limited set of (mainly transparency) requirements for AI systems with a lower risk.
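
For readers who prefer to see the tiering as logic, the three-tier structure can be sketched as a simple lookup. This is a minimal illustration only: the tier names and example use-cases below are our own assumptions, not the Act’s legal wording.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "stringent requirements apply"
        LOWER = "mainly transparency requirements"

    # Hypothetical mapping of example use-cases to tiers, loosely based on
    # the examples discussed in this article; not an exhaustive legal list.
    EXAMPLE_TIERS = {
        "social scoring of natural persons": RiskTier.UNACCEPTABLE,
        "credit scoring of natural persons": RiskTier.HIGH,
        "customer-facing chatbot": RiskTier.LOWER,
    }

    def applicable_regime(use_case: str) -> RiskTier:
        """Return the risk tier for a known example use-case (illustration only)."""
        return EXAMPLE_TIERS[use_case]

    print(applicable_regime("credit scoring of natural persons").value)
    # -> stringent requirements apply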

Who is affected by the AI Act?


Rarely does an EU regulation apply to entities outside the Union. When the EU legislator decides to adopt such an “extraterritorial scope,” it demonstrates the importance of a particular framework to the EU’s policies, objectives and internal market.

Along these lines, the AI Act will apply to all providers and users of AI systems, regardless of whether they are established within or outside the EU, as long as the output2 produced by the system is used in the Union.
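
Reduced to a predicate, the scope rule described above looks roughly as follows; the parameter names are our own shorthand, not terms from the Act.

    # Illustrative only: the territorial-scope rule as a boolean predicate.
    def ai_act_applies(established_in_eu: bool, output_used_in_eu: bool) -> bool:
        """The Act reaches even non-EU providers and users once the
        system's output is used in the Union."""
        return established_in_eu or output_used_in_eu

    # A provider established outside the EU whose system's output is used
    # in the Union still falls within scope:
    print(ai_act_applies(established_in_eu=False, output_used_in_eu=True))  # True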

To ensure European authorities can exercise their supervisory powers over non-EU players, the AI Act will furthermore require such third-country providers of AI systems to appoint an authorized representative established in the Union.

Strong focus on “high risk” AI systems


High-risk AI systems can pose a significant risk3 of harm to health, safety or fundamental rights, in particular when such systems operate as digital components of products.

To prevent over-designation of systems as high-risk, there will be some exceptions to this rule, such as when an AI system produces an output that is “purely accessory” and as such is not likely to lead to a significant risk. The Commission will specify by means of implementing acts when the output of AI systems is to be considered “purely accessory.”

Importantly, the Commission is empowered to amend the list of systems that are considered as high-risk AI.

The AI Act imposes stringent requirements for high-risk AI systems regarding the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity.

Providers of such systems are required to ensure:

  • A sound quality management system;
  • A conformity assessment and certification, prior to placing the system on the market;
  • Relevant technical documentation;
  • A robust post-market monitoring system;
  • Reporting to the national competent authorities (NCAs) any serious incidents resulting from the use of their AI systems.

Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, the AI Act prescribes specific responsibilities for users of high-risk systems as well. Users of high-risk AI systems are required to (see the illustrative checklist after this list):

  • Use such systems in accordance with the instructions of use;
  • Assign human oversight to natural persons who have the necessary competence, training and authority;
  • Monitor the operation of these systems;
  • Inform the provider or distributor of any risks or incidents involved with the use of AI systems and suspend the use of the system if needed;
  • Ensure that input data is relevant in view of the intended purpose of the AI system, to the extent such data is under their control;
  • Keep the logs automatically generated by the AI system, to the extent such logs are under their control (“record-keeping”);
  • Carry out a data protection impact assessment based on the information provided by the provider of the AI system (e.g., in the instructions of use); and
  • Cooperate with NCAs on any action those authorities take in relation to an AI system.
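
As a rough illustration of how a deploying institution might track these duties internally, the sketch below models them as a simple checklist. The field names are our own shorthand for the obligations listed above, not terms defined in the Act.

    from dataclasses import dataclass, fields

    @dataclass
    class HighRiskUserObligations:
        """One boolean per obligation; all default to 'not yet evidenced'."""
        followed_instructions_of_use: bool = False
        human_oversight_assigned: bool = False
        operation_monitored: bool = False
        incidents_reported: bool = False
        input_data_relevance_checked: bool = False
        logs_retained: bool = False
        dpia_carried_out: bool = False
        cooperating_with_ncas: bool = False

        def open_items(self) -> list[str]:
            """Return the obligations that still lack evidence of compliance."""
            return [f.name for f in fields(self) if not getattr(self, f.name)]

    checklist = HighRiskUserObligations(human_oversight_assigned=True)
    print(checklist.open_items())  # every obligation except human oversight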

Future-proofing the AI Act for general-purpose AI and foundation models like GPT


General-purpose AI systems perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others. As such, these systems may be used for a number of different purposes and more importantly, may be integrated into high risk AI systems or environments.

As these systems may be used in many different contexts and for many different purposes, the Council proposed to address this topic in a future implementing act. The only aspect the Council wishes to address in the AI Act itself is the case where such systems may be high-risk AI systems themselves or components of other high-risk systems. In this scenario, the Council wants to ensure there is a proper information flow throughout the AI value chain.

Conversely, the Parliament is determined to tackle general-purpose AI systems in the AI Act directly. To do this, the Parliament introduced the separate notion of “foundation models”. The power of these models lies in their versatility, acquired through the large set of data sources used for training, which makes them exceptionally flexible. Each foundation model can therefore be reused in countless downstream applications, whether specific-purpose or general-purpose AI systems. For this reason, the Parliament imposes stringent requirements on foundation models, including an obligation to disclose when an AI system is trained on data protected under copyright laws.

How will the AI Act impact financial sector players?


The AI Act is a horizontal piece of legislation. As such, little attention has been given to AI tools deployed in the financial sector. The only explicit references to financial use-cases are credit scoring models4 and risk assessment tools in the insurance sector. In this context, AI systems used to evaluate the credit score or creditworthiness of natural persons will likely be classified as high-risk, since they determine those persons’ access to financial resources. The same designation is expected for AI systems used for risk assessment in life and health insurance, which, if not properly designed, can lead to serious consequences for people’s lives and health, including financial exclusion and discrimination.

The Parliament proposes that AI systems deployed for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under the AI Act.

However, to avoid overlaps with the existing requirements for financial institutions, the AI Act refers directly to financial regulation for the purposes of compliance with some of the requirements regarding high-risk AI systems. There is a legal presumption that financial institutions fulfill some of these requirements (such as instructions on AI documentation, risk management processes and procedures, monitoring obligations, and the duty to keep the logs automatically generated by AI systems) by complying with the already stringent rules on internal governance arrangements and risk management processes under sectoral legislation.

Nonetheless, the AI Act remains a primary legislative source that all financial institutions should follow to ensure compliance when deploying such technology in the provision of their services. This is especially true for financial institutions that rely on AI systems designated as high-risk (such as credit scoring or certain insurance practices) and provide services to natural persons or retail clients. The list of high-risk AI systems remains dynamic and will be amended on an ongoing basis.

Given the high uncertainty over how the current debate on the regulation of general-purpose AI systems will end, financial institutions should keep a close watch on this topic’s development. These models will have numerous applications in a fast-paced sector such as finance: general-purpose AI could revolutionize how financial institutions approach content generation by allowing them to fine-tune these models for their own purposes.

In light of the upcoming Digital Operational Resilience Act (DORA), financial institutions should start to consider how DORA’s requirements interact with the obligations stemming from the AI Act. Special attention should be given to those aspects of DORA concerning the governance and management of ICT risks5, including third-party risk management. Financial institutions’ traditionally strong reliance on third-party ICT services will become even more prominent in the context of AI. Given the lack of internal capabilities for developing AI solutions, outsourcing to ICT service providers is expected to increase, as will the security issues and the challenges to institutions’ governance frameworks, particularly internal controls, data management and data protection.

In terms of supervision, the AI Act makes financial supervisory authorities responsible for overseeing financial institutions’ compliance with the requirements stemming from the AI Act, including the power to carry out ex-post market surveillance activities. No major shift is expected from a supervisory point of view: the same bodies already in charge of financial supervision in a particular Member State will integrate the AI Act and market surveillance activities into their existing supervisory practices under the financial services legislation.

The European Central Bank (ECB) shall maintain its prudential supervisory functions regarding credit institutions’ risk management processes and internal control mechanisms. NCAs responsible for the supervision of credit institutions that are participating in the Single Supervisory Mechanism (SSM) will report to the ECB any information identified in the course of their market surveillance activities that may be of potential interest to the ECB’s prudential supervisory tasks.

Finally, the AI Act calls for the establishment of a new, EU-level body to facilitate a smooth, effective and harmonized implementation of the AI Act: the European Artificial Intelligence Office. This body will be composed of representatives of the Member States and mandated to promote the interests of the AI eco-system and fulfill various advisory tasks, such as issuing opinions and recommendations.

Timeline for compliance


The Commission and the Parliament are aligned on a 24-month timeframe for application. The Council, on the other hand, is looking for a 36-month implementation period for the new law.

There will be an exemption for certain high-risk AI systems that are already placed on the market or put into service before this date. The AI Act will apply to such systems only if, from that date, those systems are subject to substantial modifications in their design or intended purpose.

The new law imposes hefty fines on those breaching its requirements: for the worst offenders, fines can reach up to EUR 40 million or 7% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
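
Expressed as a formula, the ceiling for the most serious infringements is simply the higher of the two amounts. A minimal sketch:

    # Illustrative only: the "whichever is higher" cap for the worst offenders.
    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper bound of the fine: EUR 40 million or 7% of turnover."""
        return max(40_000_000, 0.07 * worldwide_annual_turnover_eur)

    # Example: at EUR 2 billion turnover, 7% (EUR 140 million) exceeds the
    # EUR 40 million floor, so the cap is EUR 140 million.
    print(max_fine_eur(2_000_000_000))  # 140000000.0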

How Deloitte can help


Deloitte’s Regulatory Watch team closely follows digital finance developments and helps you stay ahead of the regulatory curve, while our AI and Data team has up-to-date experience in the latest AI techniques, from strategy to development and implementation, with human-centric and transparent AI at the core of what we do.

Combining this knowledge, Deloitte’s specialists and dedicated services can help you clarify the impact of the AI Regulation, identify any gaps, design potential solutions and take the necessary steps to put these solutions in place. We can support you in various critical areas such as AI strategy, business and operating models, regulatory and compliance, technology, and risk management.

1 These would be, for example, systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behavior, socio-economic status or personal characteristics), as well as most remote biometric identification systems.
2 Such as content for generative AI systems (e.g. text, video or images), predictions, recommendations or decisions, influencing the environment with which the system interacts, be it in a physical or digital dimension.
3 A significant risk is defined as a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence and duration of its effects, and its ability to affect an individual, a plurality of persons or a particular group of persons.
4 As credit scoring activities are regularly carried out by credit institutions in day-to-day practice, the European Central Bank has suggested that the entry into effect of the requirements for credit scoring models to be considered as ‘high-risk AI systems’ should be delayed until the adoption by the Commission of common specifications on the matter.
5 The definition of ICT risks provided under DORA is broad and therefore will include the risks connected to AI.
