Posted: 03 April 2024

The EU AI Act: the finish line is in sight

At a glance

  • The AI Act (AIA) has been given the green light by the EU Parliament, paving the way for it to officially become law by June 2024, followed by a two-year phased implementation period for organisations.
  • The AIA is a comprehensive and legally binding cross-sector framework applicable to organisations using AI in the EU. It sets out a prescriptive but risk-based approach to regulating single-purpose AI systems and General Purpose AI (GPAI) systems and models.
  • Single-purpose AI systems posing unacceptable risks to individuals' fundamental rights, health, or safety, or to society, will be banned. High-risk AI systems will be permitted, but their operators will face strict requirements before placement in the market or use.
  • All GPAI models and systems will be subject to transparency requirements to ensure fair allocation of responsibilities along the AI value chain. High-impact GPAI models posing systemic risks will face additional stricter obligations.
  • The AIA will have significant extraterritorial implications, as it will apply to organisations in other jurisdictions if they market or deploy their AI systems in the EU. This raises strategic questions for multinational firms, which must choose between adopting AIA standards globally, using EU-specific AI systems in the EU, or scaling back AI use in the EU.
  • While the final legal text of the AIA is not yet available, the EU Parliament has published the version it approved on 13th March. We don’t expect material changes to this version. This publication will enable firms to conduct an initial strategic and operational impact assessment. However, crucial details for interpreting and implementing the AIA will only emerge in the secondary legislation to be published in due course.
  • As a starting point, organisations should assess which of their current and planned AI systems and models fall in scope of the AIA and conduct a gap analysis against key requirements. This will provide insight into the scale and challenge of compliance efforts and help identify the impact of the AIA on strategic choices and product governance.

Overview

On 13th March 2024, the EU Parliament voted overwhelmingly to adopt the AIA, marking another significant step towards the EU’s ambition to “become a global leader in trustworthy AI”.[1] The AIA represents the first comprehensive and legally binding cross-sector framework for AI, including General Purpose AI (GPAI), from a major global economy. It sets out a risk-based, prescriptive approach focussing on the potential risks arising from specific models and applications.

The definitive legal text of the AIA is not yet available, but it is expected to be published in the EU Official Journal (OJ) in May, after final checks by lawyer-linguists and formal endorsement by the EU Council. In the interim, the version endorsed by the Parliament can be considered essentially final, and provides further clarity in relation to crucial components and details concerning banned AI practices, obligations for high-risk AI systems, and the overall approach to regulating GPAI, amongst other matters.

Most importantly, now that the Parliament’s support has been secured, the last major hurdle to the AIA becoming law in the first half of 2024 has been cleared.

Please note: This article does not cover the regulation of AI used i) by public or law enforcement authorities, ii) as safety products or components, or iii) in industries subject to harmonised EU law (e.g., boats, motor vehicles, rail, aircraft, etc.).

EU AI Act: a timeline

Since the proposal was first unveiled in April 2021, prolonged negotiations have taken place to secure a political agreement, which was reached in December 2023. The agreed political text is now in the process of being converted into the final legislative text, with the final formal approvals by both Parliament and Council set to conclude in April 2024. The AIA will "go live" 20 days after its publication in the EU Official Journal (OJ), currently expected to be in May 2024.[2]

Organisations will have two years to comply with the AIA's provisions before they become fully enforceable by the first half of 2026. However, a limited number of provisions will apply sooner. Bans on prohibited AI systems will apply six months after the AIA enters into force, while requirements for GPAI systems and models will apply 12 months after.[3]

Figure 1 – AI Act timeline

During the implementation period, the EU Commission will develop and adopt secondary legislation and guidance to provide more granular rules and instructions on what organisations must do to be deemed compliant with the AIA. At the behest of the Commission, the European Standardisation Organisations (ESOs) will also develop several standards – known as ‘harmonised standards’. Although harmonised standards are industry-led, once adopted by the EU Commission and published in the OJ, conformity with them will provide a presumption of compliance with the relevant obligations of the EU AI Act.

Definition of AI systems

The AIA’s definition of an AI system broadly aligns with that of the Organisation for Economic Co-operation and Development (OECD)[4], to ensure legal certainty and facilitate international alignment. The EU believes that this will provide sufficient and clear criteria for differentiating AI systems from simpler software systems, ensuring a proportionate regulatory approach.

AIA definition of an AI system

“An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Yet, in our view the AIA’s definition of an AI system is still very broad. It could potentially include decision-making software with inference capabilities that have been in use for decades – such as standard credit scoring models in financial services. In the absence of more detailed guidance, such a broad definition could lead to divergent interpretations of what falls outside the scope of the AIA among National Competent Authorities (NCAs), creating inconsistencies or loopholes in its application across the EU.
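To illustrate the point, the sketch below shows a simplified, hypothetical credit-scoring function of the kind that has been used in financial services for decades. The scorecard weights and cut-off are invented for illustration, but the function "infers, from the input it receives, how to generate outputs such as … decisions", which is why such conventional software could arguably fall within the AIA's broad definition of an AI system.

```python
# Illustrative only: a simplified, hypothetical credit scorecard of the kind that
# has existed for decades. It infers a decision from its inputs, so it arguably
# meets the AIA's broad definition of an AI system despite being conventional software.

def credit_score(income: float, debt: float, missed_payments: int) -> dict:
    """Return an approve/decline decision from a fixed, hypothetical linear scorecard."""
    # Invented weights and cut-off, for illustration only.
    score = 600 + 0.002 * income - 0.004 * debt - 35 * missed_payments
    return {"score": round(score), "decision": "approve" if score >= 620 else "decline"}

print(credit_score(income=45_000, debt=12_000, missed_payments=1))
# e.g. {'score': 607, 'decision': 'decline'}
```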

Similarly, while the agreement exempts AI systems developed for the sole purpose of scientific research and development activities, it does not seem to provide an exact definition of these terms. Additional clarity, through supplementary guidance and secondary legislation, will be crucial, including whether commercial research will be covered and under what conditions. This will help avoid regulatory uncertainty, as seen in the definition of scientific research in the General Data Protection Regulation (GDPR), and support investment in AI that aligns with public policy objectives.

What we do know is that the AIA will differentiate between single-purpose and GPAI systems. Single-purpose AI systems – or simply “AI systems” – are designed for specific tasks. By contrast, GPAI systems can perform a wider range of tasks and are often integrated into downstream AI systems.[5] Large Language Models, which serve as the foundation for many generative AI systems, are an example of GPAI systems.

Identifying key actors in the AI value chain

Given that AI systems are developed and distributed through intricate value chains, the AIA assigns clear roles and responsibilities to the various actors involved. These include providers, importers, distributors, and deployers, collectively referred to as AI operators (see Figure 2). However, organisations may assume multiple roles within the chain. This article, and the AIA itself, focuses on the obligations of the two primary actors in the chain: providers and deployers.

Figure 2 – Key actors in the AI value chain

AI systems: risk-based classification and regulation

The AIA classifies AI systems based on their potential risk to individuals’ fundamental rights, health, or safety, as well as to society as a whole. The AIA will completely ban a limited number of AI applications due to the unacceptable risk they pose. However, most of the legislation focuses on high-risk AI systems, such as those used in areas such as employment, education, and access to essential private services.

Figure 3 – AI systems classification

Although high-risk AI systems will be permitted, they will be subject to strict conditions. To minimise potential risks, providers and deployers must adhere to a stringent set of standards.

Figure 4 – High-level view of AI systems key requirements
 

Complying with requirements for high-risk AI systems will, for most organisations, require significant investment to put in place enhanced product governance, risk management frameworks, compliance, and internal audit capabilities for conformity assessments. Providers will be responsible for fulfilling some of the most challenging requirements of the AIA, including conducting a Conformity Assessment and registering each high-risk AI system in a new EU database before placing it on the market. For some specific use cases, independent external audits for conformity assessments by so-called "notified bodies"[6] will be required.

Fundamental Rights Impact Assessments

However, as part of the agreement, the EU confirmed that certain deployers of high-risk AI systems, such as public bodies, private operators providing public services, and financial services firms, will have to conduct a Fundamental Rights Impact Assessment (FRIA) before use. The FRIA is a comprehensive process that evaluates the potential impact of AI on fundamental rights such as privacy, non-discrimination, and freedom of expression. The results must inform risk management strategies to ensure compliance and respect for fundamental rights.

Conducting a FRIA will be a complex task – from defining the scope of the assessment to accessing and analysing information related to AI system design and development. In many cases, FRIAs will also intersect with similar requirements under other applicable regulations, such as GDPR Data Protection Impact Assessments. Many organisations may lack the expertise to conduct FRIAs, including knowledge of fundamental rights, how to balance potential benefits and risks to individuals, and how to access or assess quantitative and qualitative information about their AI systems across the value chain.

Proportionality measures

To support innovation, the AIA includes specific provisions for Small and Medium-sized Enterprises (SMEs), as well as broader proportionality measures. For example, Member States will have to establish appropriate channels to provide guidance and respond to SMEs’ queries about AIA implementation, should such channels not already exist. The AIA also includes a series of filtering conditions to ensure that only genuine high-risk applications are captured. For example, AI systems that are designed to perform narrow procedural tasks or enhance the outcome of a task previously executed by humans will not be categorised as high-risk. Several significant exemptions also apply to AI systems and models provided under free and open-source licences, and those that were put into service before the entry into force of the AIA.

Extraterritoriality

The AIA will have implications for organisations around the world. The AIA will apply not only to EU AI providers and developers, but also to those located in other jurisdictions – such as the UK and US – if their AI systems affect individuals residing in the EU. This extraterritorial impact has led some to compare the AI Act to the GDPR in its likely impact.

Multinational firms will have to decide whether to adopt AIA standards globally, to use EU-specific AI systems, or in some scenarios, to scale back use of higher-risk AI in the EU. For example, if an organisation adopts a high-risk solution developed outside the EU and deploys it in an EU entity or in a manner which affects individuals residing in the EU, it will need to comply with the full scope of the requirements, including conformity assessments and registration in the EU database if substantial modifications are made.

At a minimum, firms should start by assessing which of their current and planned AI systems are likely to fall within the Act’s definition of an AI system and, of those, which are high-risk or prohibited. This will enable a high-level gap analysis against the key requirements, providing insight into the scale and challenge of any compliance efforts required, including required enhancements to their risk management frameworks. Lower-risk AI solutions will also entail compliance with certain transparency requirements.
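As an illustrative starting point only, the sketch below shows what such a scoping and gap-analysis exercise might look like in code: an inventory of AI systems, each mapped to an indicative risk tier with obvious requirement gaps flagged. The tier labels, use-case lists, and requirement names are simplified assumptions for illustration, not definitions taken from the AIA text.

```python
# A minimal, hypothetical sketch of an AI system inventory and triage step.
# Tier labels, use-case lists and requirement names are illustrative assumptions.

from dataclasses import dataclass, field

HIGH_RISK_AREAS = {"employment", "education", "essential private services"}  # assumed examples
PROHIBITED_PRACTICES = {"social scoring"}  # assumed example of a banned practice

@dataclass
class AISystem:
    name: str
    use_case: str                 # e.g. "employment" (CV screening)
    affects_eu_individuals: bool  # drives extraterritorial scope
    controls_in_place: set = field(default_factory=set)

def triage(system: AISystem) -> dict:
    """Map a system to an indicative AIA risk tier and flag requirement gaps."""
    if not system.affects_eu_individuals:
        return {"system": system.name, "tier": "likely out of AIA scope"}
    if system.use_case in PROHIBITED_PRACTICES:
        return {"system": system.name, "tier": "prohibited"}
    if system.use_case in HIGH_RISK_AREAS:
        expected = {"risk management", "data governance", "human oversight", "conformity assessment"}
        return {"system": system.name, "tier": "high-risk",
                "gaps": sorted(expected - system.controls_in_place)}
    return {"system": system.name, "tier": "limited/minimal risk", "gaps": ["transparency notices"]}

print(triage(AISystem("CV screener", "employment", True, {"risk management"})))
# {'system': 'CV screener', 'tier': 'high-risk',
#  'gaps': ['conformity assessment', 'data governance', 'human oversight']}
```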

General Purpose AI models and systems

One of the thorniest issues of the negotiation was the classification and regulation of GPAI models and systems and ensuring fair allocation of responsibilities across the value chain. The compromise agreement reached by the EU institutions involves a tiered approach, where a provider’s GPAI models and systems are regulated based on the level of risk their products pose.

Figure 5 – GPAI classification and key requirements for providers

The EU will establish an AI Office within the EU Commission to oversee GPAI models, enforce common rules, and develop secondary legislation. A scientific panel of independent experts will advise the AI Office on evaluating GPAI models, including capabilities, high-impact designations, and safety risks.

While the more nuanced approach to regulating GPAI is welcome, it remains unclear whether it can balance AI safety with innovation and growth. Both definitions and procedures for GPAI designation in the AIA text remain high-level and will only be clarified in secondary legislation. We do know that the key threshold for high-impact GPAI model designation is based on the cumulative amount of compute used in training, set at more than 10^25 floating-point operations (FLOPs). Yet, the EU recognises the possibility of needing to update the FLOPs threshold by the time the AIA becomes applicable and has granted the Commission the authority to do so. Additionally, the Commission will have the power to consider other quantitative and qualitative criteria, such as the number of business users, when evaluating high-impact GPAI models.
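For a rough sense of scale, the sketch below compares an estimate of a model's training compute against the 10^25 FLOPs threshold. The "6 × parameters × training tokens" estimate is a widely cited rule of thumb for transformer training compute, not something specified in the AIA, and the model figures used are purely hypothetical.

```python
# Back-of-the-envelope comparison against the 10^25 FLOPs threshold.
# The 6 * parameters * tokens heuristic and the model figures are assumptions.

THRESHOLD_FLOPS = 1e25  # cumulative training compute above which a GPAI model is presumed high-impact

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute for a transformer model."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> high-impact presumption: {flops > THRESHOLD_FLOPS}")
# ~8.40e+23 FLOPs, below the 1e25 threshold in this illustrative example
```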

Until harmonised standards are published, high-impact GPAI models that pose systemic risks can comply with the AIA by adhering to Codes of Practice approved by the Commission. The Codes of Practice will be developed in collaboration with industry, the scientific community, civil society, and other stakeholders.[7] The Codes of Practice should be available at least three months before the application date of the GPAI provisions.

The overall strategic and compliance implications of the proposed requirements for GPAI providers are likely to be substantial. While regulation can help providers to demonstrate that their products are trustworthy and reliable, compliance will demand significant effort and investment. A preliminary study from Stanford University suggests that all major providers would fall well short of most, if not all, of the draft AIA requirements that were initially proposed by the EU Parliament.[8] According to the study, the most significant shortfalls concern copyrighted data, transparency, testing and evaluation, and data governance.

Providers will need to invest in strengthening their capabilities to assess GPAI model risks, including reviewing their approaches to testing, evaluation, and risk mitigation. Enhanced data governance will be crucial, requiring improved methods for data collection, storage, and lawful use. Investing in increased transparency and reporting capabilities will also be inevitable.

AI sandboxes

While we have discussed some of the main aspects of the AIA, there are many other provisions into which we have not delved. One example is the set of measures aimed at promoting innovation among SMEs and start-ups.

These include encouraging individual Member States to establish regulatory sandboxes, according to a common set of rules to promote standardised approaches across the EU and facilitate cooperation between NCAs. SMEs and start-ups will have priority access to the sandboxes, with the aim of removing some of the barriers they may face when launching their products.

Spain is playing a leading role in this space by launching the first AI Regulatory Sandbox pilot. The initiative aims to operationalise AIA requirements, including conformity assessments and post-market monitoring activities. As part of the pilot, the Spanish government is developing technical guidelines for high-risk AI systems, policies, and procedures that will serve as a framework for the Regulatory Sandbox. Other EU countries are likely to follow suit over the next two years to support growth of their own AI sectors, while the Commission will facilitate cooperation at EU level through its AI and Digitalisation of Businesses Expert Groups.

Missing pieces of the puzzle

The proposed measures in the AIA will have a far-reaching impact on firms and their AI innovation strategies, both in the EU and globally. Like the GDPR, the AI Act has a cross-sector remit and imposes hefty fines, with penalties of up to 7% of global turnover or €35 million for the most significant infringements.

Even more significantly, organisations may need to cease deploying certain AI systems or make significant product changes to comply with the AIA requirements.

However, while the impending finalisation of the AIA is a significant milestone, it is only one piece of the puzzle in terms of the regulatory landscape for organisations developing or deploying AI systems.

Technical standards

Organisations will have to wait for secondary legislation and harmonised standards to emerge between the AIA's entry into force in H1 2024 and the end of the implementation period in H1 2026 before they can fully finalise their compliance plans. For example, harmonised standards will cover critical elements of the legislation such as risk management, data quality, accuracy, transparency, robustness, and human oversight. This underscores the importance of organisations preparing in advance for compliance, as they will have a narrow window to ensure alignment with technical specifications and complete their conformity assessments.

Interaction with other technology-neutral EU regulatory frameworks

Technology-neutral cross-sector regulations, such as GDPR, and sector-specific regulations, such as those governing financial services or digital markets, will be applicable depending on the specific AI use case. However, the interaction between these regulations and the AIA raises some questions that remain unanswered for now. For example, we have already highlighted the interplay between FRIAs and GDPR as a potential challenge. In addition, the responsibilities of different actors in the AI value chain may not always align with those of the organisation that is primarily responsible for protecting personal data under GDPR, i.e., the data controller.

Another example is the link between the AIA and the Digital Services Act (DSA). To ensure a coordinated approach to regulating the digital landscape, the AIA indicates that Very Large Online Platforms (VLOPs) that comply with the DSA may also be considered compliant with selected AIA requirements – especially in relation to risk management – by default.

These interactions raise important questions around cooperation and alignment between NCAs responsible for the AIA and those responsible for other horizontal and sector-level regulations.

Conclusion

The key elements of the AIA are now known and firmly in place. Organisations looking to automate in low-risk areas, such as simple chatbots, now have clarity to scale up with confidence. The overall AIA risk-based classification of AI systems will be helpful in determining the greater robustness required around more complex “black box” models. Organisations now need to develop their overall AI strategy, refining it as more details emerge in the run-up to the AIA's entry into force in H1 2024 and through the implementation phase.

The impact of the EU AIA in shaping global AI regulations will depend on the approaches adopted by the UK, US and other key global regulators. We already see a degree of divergence in detail between countries which will be challenging for organisations to navigate. Within the EU itself, diverging national-level interpretations may also add a further layer of complexity. Maintaining a comprehensive horizon scanning capability which feeds into the AI strategy will be key to deploying trustworthy and compliant AI systems.

 

[1] https://ec.europa.eu/commission/presscorner/detail/en/speech_21_1866

[2] Timeframes are protracted due to the time required to translate the text into all 24 EU official languages.

[3] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[4] https://oecd.ai/en/ai-principles

[5] AI systems that are built using other AI systems or components.

[6] Independent third parties designated by EU Member States.

[7] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[8] https://crfm.stanford.edu/2023/06/15/eu-ai-act.html

Key contacts

Suchitra Nair

Partner

Suchitra is a Partner in the EMEA Centre for Regulatory Strategy and helps our clients to navigate the regulatory landscape around technological innovation. She sits on the UK Fintech Executive and leads our thought leadership on topics such as digitisation, cryptoassets, AI, regulatory sandboxes, Suptech, payment innovation and the future of regulation. She recently completed a secondment at the Bank of England, supervising digital challenger banks. Suchitra is a member of various industry working groups on innovation in financial services and has regularly featured in the Top 150 Women in Fintech Powerlist (Innovate Finance). She is a qualified Chartered Accountant and has previously worked in Deloitte’s Audit, Corporate Finance and Risk Advisory teams, where she led large-scale regulatory change projects.

Valeria Gallo

Senior Manager

Valeria is a Senior Manager in the EMEA Centre for Regulatory Strategy. Her focus is on regulatory initiatives related to payments and FinTech. Valeria joined Deloitte in early 2012 from a global strategy consulting firm where she was the Business Operations Manager for the European financial services practice.

Robert MacDougall

Director

Robert is a Director in Deloitte's EMEA Centre for Regulatory Strategy, where he leads the Centre’s work on regulation in Digital Markets. Prior to joining Deloitte, Robert spent eleven years at Vodafone Group, setting Group policy positions across a wide variety of regulatory initiatives relevant to the promotion of competition and protection of consumers in digital markets. Robert has over a decade's experience working at regulatory bodies relevant to the sector, spending eight years at Ofcom (and its predecessor Oftel) and four years at the UK's competition and consumer protection authority. This included a secondment to the US Federal Trade Commission working on technology topics in the FTC's Bureau of Consumer Protection.

Louis Wihl

Director

Louis is a technology and commercial contracts lawyer with over 12 years’ experience advising a range of both customer and supplier clients, from early-stage organisations to household names and listed companies. He leads on the drafting and negotiation of a wide range of high value, business critical and strategic contracts, often in the context of digital transformation projects and regulated outsourcings. These include agreements for software-as-a-service, platform-as-a-service, infrastructure-as-a-service, on-premises software licences, systems integration services, IT outsourcing, business process outsourcing and other IT and cloud-based technology solutions. Louis also advises on the use of emerging technologies and is Deloitte Legal’s UK Artificial Intelligence Legal Advisory Lead. In this role he brings together the best of Deloitte’s AI legal knowledge with Deloitte’s business, technology and sector expertise to provide comprehensive solutions to the challenges AI poses, enabling organisations to make the most of the opportunities AI offers.

Lewis Keating

Director

Lewis is a Director in our Risk Advisory practice with more than 10 years’ experience helping organisations of all sizes with Technology Risks and Controls. He has led several assurance and advisory projects on AI Risk and AI Governance and is helping organisations work through how to manage the widespread adoption of AI in a safe and controlled manner. Lewis leads several Internal Audit reviews on the topic of AI Risk, and his areas of interest and expertise include IT Governance, AI Risk, ML Governance, IT Strategy and IT Risk Management.

Lucia Lucchini

Senior Manager

Lucia is a Senior Manager in our Cyber, Data and Digital practice within Deloitte Risk Advisory. Her experience ranges from privacy & data protection to the intersection between privacy & ethics in new technologies, specifically AI. Lucia focuses on the changing regulatory landscape surrounding new technologies, with particular attention to AI governance and policy. Lucia is also part of the Research & Innovation team, specialising in conducting cyber-related research.