Perspectives

EU's new approach to AI

To ensure a trustworthy development of AI, the EU has proposed a legal framework regulating the use of AI. But what does that mean, and how do we prepare?

Three years after the General Data Protection Regulation (GDPR) entered into force, the European Commission published the world’s first proposal for a legal framework regulating specific uses of AI. The AI Act introduces a risk-based approach to AI systems with life cycle requirements to ensure the development of trustworthy AI. As with other well-known regulations, the EU relies on hefty fines to ensure compliance. Fines for violating the AI Act can amount to up to 6% of global annual turnover, or up to 30 million euros, an even higher level than the fines introduced with the GDPR.

But what exactly is the EU attempting to do with this new legal framework, and what can companies and organisations do to prepare?

Background

On April 21st, 2021, the EU AI Act proposal was published as part of the strategy “Shaping Europe’s digital future”. The Commission’s goal with the AI Act is for the EU to take the lead in international regulation and in driving innovation, while protecting the fundamental rights of EU citizens in the age of AI.

The AI Act builds on the Ethics Guidelines for Trustworthy AI published by the independent High-Level Expert Group on Artificial Intelligence, which was set up by the European Commission in June 2018. The Guidelines put forward the following seven key requirements that AI systems should meet in order to be deemed trustworthy:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

These requirements reflect fundamental principles of lawfulness, ethics, and robustness, put forward with the purpose of creating AI systems that are free of bias and operate within the boundaries of ethical standards.

Risk based approach

The proposal uses a risk-based approach to differentiate between four types of AI systems, based on their potential risks to the fundamental rights and safety of EU citizens. The four levels of risk are:

  • Unacceptable risk
  • High risk
  • Limited risk 
  • Minimal or no risk

Unacceptable risk 

All AI systems that can be considered a clear threat to the safety, livelihoods and rights of people will be prohibited within the EU. This includes: 

  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with some exceptions). An example would be the use of facial recognition software with surveillance cameras monitoring public spaces. 
  • Manipulation of human behavior, opinions and decisions (e.g., toys using voice assistance that encourage dangerous behavior)
  • Classification of people based on their social behavior (social scoring)

The narrow exemptions for real-time remote biometric identification are strictly defined and regulated. It is permitted only when necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offence. Such use also requires the authorisation of a judicial or other independent body and is subject to appropriate limits in time, geographic reach, and the databases searched.

High-risk

AI systems in the high-risk category are the main focus of the AI Act. High-risk systems are permitted but subject to strict obligations before being placed on the market. The high-risk category includes AI systems that are used as a safety component of a product, or are themselves a product, covered by the Union harmonisation legislation listed in Annex II and that require a third-party conformity assessment.

In addition, AI systems used in the specific areas below are deemed high-risk:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

Limited risk

AI systems that fall under the limited-risk category are permitted but subject to transparency requirements. These requirements apply to AI systems that interact with humans (e.g., chatbots), are used to detect emotions or determine categories based on biometric data, or are used for creating manipulated content (e.g., deepfakes, where AI software can be used to manipulate a video by adding a celebrity’s or politician’s face to someone else’s body).

Minimal or no risk 

AI systems that do not fall into one of the other categories mentioned above are free to be used without requirements. However, the AI Act mentions the possibility of adopting codes of conduct to follow suitable requirements and to ensure that these AI systems are indeed trustworthy.
 
Who does the AI Act apply to?

The AI Act applies to providers who develop an AI system, or have an AI system developed, with the intent of placing it on the market or putting it into service in the EU under the provider’s own name or trademark. This applies irrespective of whether those providers are established within the EU or in a third country. In addition, it applies to users of AI systems located within the EU, and to providers and users of AI systems located in a third country where the output produced by the system is used in the EU.
 
Importers of AI systems from outside the EU who place an AI system on the market or put it into service within the EU, and distributors who make an AI system available on the European market, are also subject to requirements in the AI Act.
 
Cradle-to-grave requirements for high-risk AI systems

The AI Act sets forth requirements to ensure that AI systems are trustworthy throughout their lifecycle. Providers of high-risk AI systems are subject to the largest part of the requirements in the AI Act, including the requirements listed below. Providers are also responsible for ensuring the conformity assessment and CE marking of their AI systems.

Users of high-risk systems are subject to requirements to a lesser degree; however, the user must, among other things, still ensure that the high-risk AI system is used in accordance with its instructions and must continuously monitor the AI system’s activity.
 
Risk Management System

A provider is required to establish, implement, document, and maintain a risk management system to identify risks associated with the high-risk AI system and to adopt suitable risk management measures. The risk management system must be a continuous, iterative process running throughout the entire lifecycle of a high-risk AI system and must be systematically updated.
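To make the idea of a continuous, iterative process more concrete, the sketch below shows what a very simple risk register could look like in Python. The risk descriptions, scoring scale and review note are purely illustrative assumptions; the AI Act does not prescribe any particular format or tooling.

```python
# A minimal, hypothetical sketch of an iterative risk register for a high-risk
# AI system. Risk names, scales and review cadence are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Risk:
    description: str
    likelihood: int          # illustrative 1-5 scale
    impact: int              # illustrative 1-5 scale
    mitigation: str
    history: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def reassess(self, likelihood: int, impact: int, note: str) -> None:
        """Record a re-assessment so the register stays a living, documented process."""
        self.history.append((datetime.now(timezone.utc).isoformat(), self.score, note))
        self.likelihood, self.impact = likelihood, impact


register = [
    Risk("Biased outcomes for under-represented groups", 4, 5,
         "Rebalance training data; add fairness tests to the release gate"),
    Risk("Model drift after deployment", 3, 4,
         "Monitor live accuracy; retrain when thresholds are breached"),
]

# Periodic review: re-assess each risk and keep the audit trail.
for risk in register:
    risk.reassess(risk.likelihood, risk.impact, "Quarterly review - no change")
    print(f"{risk.description}: score {risk.score}, mitigation: {risk.mitigation}")
```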
 
Data and Data Governance

When training models for high-risk AI systems, providers must ensure that the datasets used are of sufficiently high quality. Hence, training, validation and testing datasets must be subject to appropriate data governance and management practices, and the datasets must be relevant, representative, free of errors and complete.
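As a rough illustration, such governance practices can be partly automated. The sketch below, written with pandas, checks a training dataset for missing values, duplicates and under-represented groups. The column names ("gender", "age"), the threshold and the example data are assumptions for illustration; the AI Act requires appropriate practices but does not prescribe specific checks.

```python
# Illustrative data-governance checks on a training dataset, using pandas.
import pandas as pd


def check_dataset(df: pd.DataFrame, protected_column: str, min_share: float = 0.10) -> list[str]:
    findings = []

    # Completeness / free of errors: no missing values or duplicate rows.
    if df.isna().any().any():
        findings.append("Dataset contains missing values")
    if df.duplicated().any():
        findings.append("Dataset contains duplicate rows")

    # Representativeness: every group in the protected column should have a minimum share.
    shares = df[protected_column].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            findings.append(f"Group '{group}' makes up only {share:.1%} of the data")

    return findings


training = pd.DataFrame({
    "age": [25, 31, 42, 58, 37, 29],
    "gender": ["f", "m", "m", "m", "m", "m"],
    "outcome": [1, 0, 1, 0, 1, 1],
})
for finding in check_dataset(training, protected_column="gender", min_share=0.25):
    print("Finding:", finding)
```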
 
Technical Documentation

Technical documentation demonstrating that the high-risk AI system complies with the requirements in the AI Act must be drawn up before the system is placed on the market or put into service and must be continuously updated.
 
Record Keeping

High-risk AI systems must be designed with automatic record keeping of events (‘logs’). The logging must be in accordance with recognised standards or common specifications and must monitor the occurrence of situations that might result in the AI system constituting a risk.
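A minimal sketch of what such automatic event logging could look like is shown below, using Python’s standard logging module. The event fields, the confidence threshold and the log file name are assumptions; which events must actually be captured depends on the system and the applicable standards or common specifications.

```python
# Minimal sketch of automatic event logging ("logs") for an AI system.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)


def log_event(event_type: str, input_reference: str, output: str, confidence: float) -> None:
    """Append one structured audit record per prediction or notable situation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,        # e.g. "prediction", "risk_situation"
        "input_reference": input_reference,
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
    # Flag situations that might indicate the system constitutes a risk.
    if confidence < 0.5:
        logger.warning(json.dumps({**record, "event_type": "low_confidence"}))


log_event("prediction", input_reference="application-1234", output="approved", confidence=0.42)
```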
 
Transparency & Information

High-risk AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. Instructions and information for users about the AI system must be concise, complete, correct, and clear.
 
Human oversight

High-risk AI systems must be designed and developed with interface tools that make human oversight possible while the AI system is in use. The aim is to prevent or minimise the potential risks to health, safety or fundamental rights that may emerge when an AI system is in use. In a worst-case scenario, a human must be able to intervene in the operation or stop the AI system.
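One common human-in-the-loop pattern is sketched below: decisions below a confidence threshold are routed to a human reviewer, and an operator can stop the system entirely. The class name, the threshold and the review mechanism are assumptions for illustration, not requirements taken from the Act.

```python
# Illustrative human-oversight gate: defer low-confidence decisions to a person
# and provide an emergency stop that blocks further automated decisions.

class HumanOversightGate:
    def __init__(self, confidence_threshold: float = 0.8):
        self.confidence_threshold = confidence_threshold
        self.stopped = False

    def stop(self) -> None:
        """Emergency stop: prevent any further automated decisions."""
        self.stopped = True

    def decide(self, model_output: str, confidence: float, human_review) -> str:
        if self.stopped:
            raise RuntimeError("AI system stopped by human operator")
        if confidence < self.confidence_threshold:
            # Defer to a human instead of acting automatically.
            return human_review(model_output, confidence)
        return model_output


def human_review(output: str, confidence: float) -> str:
    # In a real system this would route the case to a human case worker.
    print(f"Deferred to human review: model suggested '{output}' at {confidence:.0%} confidence")
    return "pending human decision"


gate = HumanOversightGate()
print("Final decision:", gate.decide("reject", confidence=0.55, human_review=human_review))
```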
 
Accuracy, Robustness and Cybersecurity

Maintaining an appropriate level of accuracy, robustness and cybersecurity throughout the lifecycle is a requirement for high-risk AI systems.
 
When will the new rules apply and how to begin preparing for the Act?

The proposal for the AI Act published by the European Commission is currently being discussed in the Council and the European Parliament before the final text is set and adopted. The expectation is that this will happen during 2023. When the final text is adopted, there will be a two-year implementation period before the regulation enters into force, similar to what we saw when the GDPR entered into force. Even though two years might seem like a long time to implement the new requirements, two years goes by quickly when preparing an organisation for new legislative requirements, so Deloitte recommends starting sooner rather than later.

In order to ensure future compliance, reduce risk and send a clear and trustworthy message to the market, AI providers should start preparing for the AI Act with the following tasks:

  1. Re-examine the risk management framework to identify gaps against the regulatory requirements in the AI Act and update accordingly. The risk management framework should cover the areas below:
    1. Governance
    2. Data quality
    3. Development and testing
    4. Evaluation and deployment 
    5. Continuous monitoring
  2. Identify the risk level and categorise the AI systems accordingly (a simple sketch of such a categorisation follows this list)
  3. Perform risk assessments (including DPIAs)
  4. Implement necessary control measures
  5. Monitor and report

AI Liability Directive
  
On September 28th, 2022, the European Commission published a proposal for the AI Liability Directive. The AI Act and the AI Liability Directive supplement each other: they apply at different moments and reinforce each other. While the AI Act aims at preventing damage, the AI Liability Directive lays down a safety net for compensation in the event of damage. It is noteworthy that the Directive will apply to damage caused by AI systems, irrespective of whether they are classified as high-risk under the AI Act or not.
 
More information?
For more information about the EU's AI Act or Privacy, please contact Malene Fagerberg or Daniel Tvangsø via the contact details below.
