Digitalization: European Commission’s first legal framework on trustworthy AI

23 April 2021

Regulatory News Alert

Context and objectives

On 21 April 2021, the European Commission (EC) proposed the first-ever Regulation (“AI Regulation”) laying down harmonized rules on artificial intelligence (AI) and amending certain EU legislative acts. It particularly concerns the following:

  • Harmonized rules for placing AI systems on the market, putting them into service, and using them in the EU.
  • Prohibitions of certain AI practices that exploit vulnerabilities or materially distort behavior to an extent that could cause physical or psychological harm.
  • Specific requirements for high-risk AI systems and obligations of operators and users of these systems.
  • Harmonized transparency rules for AI systems that interact with natural persons, are used for emotion recognition or biometric categorization, or generate or manipulate image, audio or video content.
  • Rules on market monitoring and surveillance.

Scope of application

The AI Regulation defines an AI system as software developed for a given set of human-defined objectives, which generates outputs such as content, predictions, recommendations, or decisions that influence the environments it interacts with.

Given that AI systems can easily be deployed across multiple sectors and circulate throughout the EU and across borders, the AI Regulation will apply to both public and private actors inside and outside the EU if the AI system is placed on the market or put into service in the EU, or its use affects people located in the EU, particularly:

  • Providers introducing AI systems on the market or putting AI systems into service in the EU, irrespective of whether they are established within the EU or in a third country;
  • Users of AI systems (e.g., a bank using a resume screening tool) located within the EU; and
  • Providers and users of AI systems that are in a third country, where the AI system’s output is used in the EU.


Risk-based approach for trustworthy AI

The AI Regulation follows a risk-based approach that distinguishes between uses of AI creating an unacceptable risk, a high risk, or a low or minimal risk.

AI systems deemed to create an unacceptable risk shall be prohibited. This includes AI systems or applications that manipulate human behavior to circumvent users' free will and systems that allow “social scoring” by governments.

AI systems are classified as high-risk if they are intended to be used as safety components of a product, or where stand-alone AI systems may affect fundamental rights, for example in law enforcement, employment or access to essential public services.
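
For illustration only, an institution building an inventory of its AI systems might capture these tiers as a simple classification. The following is a minimal, hypothetical sketch: the tier names follow the proposal, but the triage flags are placeholders for what is in practice a legal assessment against the Regulation's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the proposed AI Regulation (names follow the proposal)."""
    UNACCEPTABLE = "prohibited"                # e.g., government social scoring
    HIGH = "mandatory requirements apply"      # e.g., safety components, recruitment tools
    LOW_OR_MINIMAL = "voluntary codes of conduct"

def triage(prohibited_practice: bool,
           safety_component: bool,
           affects_fundamental_rights: bool) -> RiskTier:
    """Hypothetical triage helper: real classification requires legal
    analysis against the Regulation's annexes, not three boolean flags."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if safety_component or affects_fundamental_rights:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

# Hypothetical example: a resume screening tool used by a bank.
print(triage(prohibited_practice=False, safety_component=False,
             affects_fundamental_rights=True))  # RiskTier.HIGH
```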

Before placing a high-risk AI system on the EU market or putting it into service, providers must subject the system to a conformity assessment. A third-party conformity assessment is required if the AI system is intended to be used as a safety component of a product.

High-risk AI systems may only be placed on the EU market or put into service if they comply with certain mandatory requirements. These include:

  • A risk management system shall be established, implemented, documented and maintained, taking into account the expected level of user knowledge and the residual risks arising from reasonably foreseeable misuse.
  • High-risk AI systems that use techniques involving the training of models with data shall be developed based on high-quality training, validation and testing data sets that are subject to appropriate data governance practices.
  • The technical documentation of a high-risk AI system shall be drawn up before the system is placed on the market or put into service and shall be kept up to date.
  • High-risk AI systems shall be designed and developed with capabilities that enable the automatic recording of events (logs) while the systems are operating. These logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to its intended purpose (see the sketch after this list).
  • The system’s operation shall be sufficiently transparent to enable users to interpret its output and use it appropriately.
  • High-risk AI systems shall be effectively overseen by natural persons (human oversight) when the AI system is in use. This should include the ability to override, reverse or intervene in the output.
  • High-risk AI systems shall be designed and developed with an appropriate level of accuracy, robustness and cybersecurity, and consistently perform to these levels throughout their lifecycle.
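
As a minimal illustration of the logging requirement above, a prediction call could be wrapped so that every event is recorded automatically. The event schema below is hypothetical; the Regulation does not prescribe a specific logging format, and a real implementation would be driven by the system's intended purpose.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def logged_predict(model, features: dict):
    """Call model.predict() and automatically record the event, in the
    spirit of the traceability requirement. Field names are hypothetical."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique reference for traceability
        "timestamp": time.time(),        # when the prediction was made
        "inputs": features,              # data the decision was based on
    }
    output = model.predict(features)
    event["output"] = output
    audit_log.info(json.dumps(event, default=str))
    return output
```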

For AI systems with low or minimal risk, the EC proposes to encourage and facilitate the creation of codes of conduct to foster the voluntary application of the requirements applicable to high-risk AI systems, as well as further commitments relating to, for example, technical specifications and solutions or environmental sustainability. These codes of conduct may be drawn up by individual providers of AI systems, by organizations representing them, or both.


Next steps

The European Parliament and the Member States in the Council will need to adopt the EC's proposal under the ordinary legislative procedure. Once adopted, the AI Regulation will be directly applicable across the EU.

The AI Regulation shall enter into force on the 20th day following its publication in the Official Journal of the EU. Most of its provisions shall apply 24 months after its entry into force, i.e., around 2024.


How does it affect your institution?

To some extent, AI systems are already implicitly covered by the internal governance requirements of the EU’s financial services legislation that applies to financial institutions (FIs), particularly credit institutions (CIs).

  • FIs that provide or use AI systems shall be in scope of the AI Regulation, and its provisions may affect FIs' planned use of AI and their digital innovation strategies.
  • The market surveillance authority for the AI Regulation shall be the relevant authority responsible for the financial supervision of FIs.

This may also impact certain areas of trading (such as algorithmic trading), client payment management, and investments where AI systems may be used.

Regarding AI systems provided or used by regulated CIs, the EC emphasizes consistency between the AI Regulation and Directive 2013/36/EU (CRD IV):

  • The conformity assessment procedure and some of the providers’ procedural obligations regarding risk management, post-market monitoring and documentation will be integrated into the existing obligations and procedures under CRD IV.
  • To avoid overlaps between the two frameworks, limited derogations are provided relating to providers’ quality management systems and the monitoring obligations placed on users of high-risk AI systems, to the extent that these apply to CIs regulated by CRD IV.

The EC also proposes that Member States lay down the rules on penalties and administrative fines applicable to infringements of the AI Regulation. Depending on the severity of the infringement, fines can reach:

  • EUR 10 million to EUR 30 million; or
  • 2% to 6% of the total worldwide annual turnover for the preceding financial year, whichever is higher (illustrated below).
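
To illustrate the "whichever is higher" mechanism, the sketch below computes the upper bound of a fine; the turnover figure is hypothetical.

```python
def max_administrative_fine(turnover_eur: float,
                            fixed_cap_eur: float,
                            turnover_pct: float) -> float:
    """Upper bound of a fine under the proposal: the higher of a fixed
    cap and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Hypothetical example: top-tier infringement (EUR 30 million / 6%) by a
# firm with EUR 2 billion in annual turnover.
print(max_administrative_fine(2_000_000_000, 30_000_000, 0.06))
# 120000000.0, i.e., EUR 120 million (the 6% figure exceeds the fixed cap)
```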


How can Deloitte help you?

Deloitte’s advisory specialists and dedicated services can help you clarify the impact of the AI Regulation, identify any gaps, design potential solutions and take the necessary steps to put these solutions in place. We can support you in various critical areas such as strategy, business and operating models, regulatory and compliance, technology, and risk management.

Deloitte’s Regulatory Watch Kaleidoscope service helps you stay ahead of the regulatory curve to better manage and plan upcoming regulations.

Contacts

Subject matter specialists

Jean-Pierre Maissin
Partner – Strategy, Analytics & M&A and EU Leader
Tel: +352 45145 2834
jpmaissin@deloitte.lu

Charles Delancray
Partner - Core Business Operations
Tel: +352 45145 2618
cdelancray@deloitte.lu

Nicolas Griedlich
Director – Artificial Intelligence & Data
Tel: +352 45145 4052
ngriedlich@deloitte.lu

Anke Joubert
Manager – Artificial Intelligence & Data
Tel: +352 45145 2463
ajoubert@deloitte.lu


Regulatory Watch Kaleidoscope service

Simon Ramos
Partner – IM Advisory & Consulting Leader
Tel: +352 45145 2702
siramos@deloitte.lu

Jean-Philippe Peters
Partner – Risk Advisory
Tel: +352 45145 2276
jppeters@deloitte.lu

Benoit Sauvage
Director – Risk Advisory
Tel: +352 45145 4220
bsauvage@deloitte.lu

Marijana Vuksic
Senior Manager – Risk Advisory
Tel: +352 45145 2311
mvuksic@deloitte.lu
