Unpacking the EU AI Act: The future of AI governance
Understanding compliance requirements and strategic insights
The European Union (EU) Artificial Intelligence (AI) Act aims to protect the safety and fundamental rights of people and businesses while fostering AI innovation and adoption within the EU.1 The landmark legislation has drawn considerable attention because of its demanding obligations, broad scope, and impact across industries. Along with the EU’s recently applied Digital Services Act (DSA), the EU AI Act is part of a broader European approach to balancing innovation and digital transformation with ethical considerations and user safety. The AI Act formally entered into force on August 1, 2024, and will become fully applicable two years later, with some exceptions.2
The Act adopts a risk-based approach to regulating AI, imposing progressively stricter requirements based on the level of risk associated with different uses of AI across sectors such as music and entertainment, technology, health care, education, and manufacturing. The EU AI Act classifies AI systems into four categories based on their potential risk to rights and safety: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk (an illustrative sketch of this tiered structure follows the list below):
- The Act forbids applications deemed to carry unacceptable risk, including systems that manipulate human behavior or classify people based on social behavior, as well as certain uses of real-time remote biometric identification systems.
- High-risk systems, such as those used in safety features for aviation, cars, medical devices, and critical infrastructure management, must meet specified regulatory requirements before they can be deployed.
- AI applications that pose limited risks, such as chatbots and AI-generated content like deepfakes, are required to meet specific transparency obligations. This ensures that users are aware that they are interacting with an AI system.
- The vast majority of AI applications, such as AI-driven video games and AI-enabled virtual assistants, fall under the minimal risk category and can operate with minimal regulatory constraints, although providers are encouraged to adhere to voluntary codes of conduct.
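To make the tiered structure concrete, the sketch below expresses the four categories and a one-line summary of their headline obligations as a simple lookup. The category names come from the Act itself; the enum, the obligation summaries, and the function name are illustrative simplifications for orientation only, not a compliance determination tool.

```python
from enum import Enum

class RiskTier(Enum):
    # The four risk categories named in the EU AI Act
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative, non-exhaustive summaries of the headline obligations per tier
HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: the system may not be placed on the EU market.",
    RiskTier.HIGH: "Must satisfy specified regulatory requirements before deployment.",
    RiskTier.LIMITED: "Transparency obligations, e.g., disclosing that users interact with AI.",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct encouraged.",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the illustrative obligation summary for a given risk tier."""
    return HEADLINE_OBLIGATIONS[tier]

# Example: a customer-service chatbot would typically fall in the limited-risk tier.
print(headline_obligation(RiskTier.LIMITED))
```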
The Act specifies compliance requirements for permitted uses, which vary depending on the risk classification of the AI system. These requirements focus on governance, technical documentation, human oversight, risk management, and transparency to help ensure AI systems’ accuracy, robustness, and security. Additionally, providers established outside the EU will be required to appoint an authorized representative established in the EU, giving EU authorities a point of contact with the required information on the compliance of their AI systems, before placing a high-risk system or general-purpose AI model on the EU market.
Noncompliance with the AI Act carries significant penalties including, but not limited to:
- Infringements related to prohibited AI systems can lead to fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher; lesser infringements can result in fines of up to EUR 15 million or 3% of annual turnover (see the illustrative calculation after this list).
- Incorrect reporting can attract penalties of up to EUR 7.5 million or 1% of turnover.
- Beyond monetary fines, authorities can also mandate the withdrawal of noncompliant AI systems from the market.
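The “whichever is higher” rule means the applicable ceiling is simply the maximum of the fixed amount and the stated percentage of global annual turnover. The short sketch below illustrates that arithmetic using the figures quoted above; the tier labels and the max_fine_ceiling function are hypothetical, and the result is a maximum ceiling, not a prediction of an actual penalty.

```python
# Maximum-fine ceilings under the "whichever is higher" rule, using the figures quoted above.
# Tier labels are illustrative; this is a toy calculation, not legal advice.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR 35M or 7% of global annual turnover
    "other_infringements":  (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_reporting":  (7_500_000, 0.01),    # EUR 7.5M or 1%
}

def max_fine_ceiling(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based percentage for a tier."""
    fixed_amount, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: with EUR 2 billion in global annual turnover, 7% (EUR 140 million)
# exceeds the EUR 35 million fixed amount, so the ceiling is EUR 140 million.
print(max_fine_ceiling("prohibited_practices", 2_000_000_000))  # 140000000.0
```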
Wondering what you can do to prepare?
The EU AI Act is perceived as a pivotal regulatory benchmark and is expected to set a precedent for comparable regulations worldwide. Notably, regulatory proposals have already taken shape in other jurisdictions, accompanied by the publication of various frameworks. Many of these measures share overlapping principles and themes, reinforcing the global trend toward stricter AI governance.
Contacts
Tim Davis
Tanneasha Gordon
Jennifer McMillan
Oriane Dalmeida
End Notes:
1. Tammy Whitehouse, “How EU AI Act may accelerate compliance regime for US enterprises,” Deloitte Risk & Compliance Journal for the Wall Street Journal, February 13, 2024.
2. “AI Act | Shaping Europe’s digital future,” European Commission (europa.eu).