
EU Artificial Intelligence Act

Legal Alert

On 1 August 2024, the EU Artificial Intelligence Act (the "EU AI Act" or the "Act") entered into force.

Companies that use artificial intelligence (AI) and intend to continue operating in the European market must prepare to fulfill new obligations.

The purpose of this Act is to regulate the market for AI systems, in particular those that pose high risk.

The Act applies to all developers of AI systems and models present in the EU market, irrespective of whether they are established in the EU or in a third country.

The updated EU legislation introduces a risk-based classification, differentiating between uses of AI that create:

  • Unacceptable risk. The Act prohibits AI systems that pose this level of risk (for example, social scoring and behavioral manipulation systems).
  • High risk. These include biometric identification systems and AI systems used in critical infrastructure and in such fields as education, human capital management, and public services.
  • Limited risk. These are mostly chatbots and deepfake systems.
  • Minimal risk. This category includes AI components integrated into other software products, such as AI in computer games or spam filters. The operation of such systems is not regulated by the Act but may be subject to other EU legislation. The Act does not apply to AI systems used in the course of personal, non-professional activity.

In addition, the Act establishes a definition of general-purpose AI (GPAI) systems.

A GPAI system is an AI system based on a general-purpose AI model that is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

Developers of GPAI models and GPAI-based systems must, in addition to complying with copyright law, make publicly available technical documentation, instructions for use, and a summary of the content used to train the model.

With respect to certain models, developers must conduct evaluations and testing, track and report any serious incidents, and ensure an adequate level of cybersecurity protection.

New obligations

Limited-risk systems are mainly subject to transparency obligations. Developers are required to inform people that they are communicating or interacting with an AI system and to label all AI-generated content.

Developers of high-risk AI systems are required to:

  • Create a risk management system that will run throughout the entire lifecycle of a high-risk AI system
  • Implement proper data governance and management practices and ensure that datasets for training, validation, and testing are relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the system's intended purpose
  • Prepare technical documentation to demonstrate that their high-risk AI systems are fully compliant with requirements under applicable legislation; provide national competent authorities with the necessary information to assess the compliance of the AI system with those requirements
  • Ensure that their AI systems technically allow for the automatic recording of events crucial for identifying national-level risks and cases of substantial system modification
  • Have a quality management system in place to ensure that compliance requirements are met
  • Make instructions for use available so that AI system users can also ensure their compliance with legal requirements
  • Design their high-risk AI systems to allow deployers to implement human oversight
  • Ensure that AI system outputs have an appropriate level of accuracy, reliability, and cybersecurity

Developers of GPAI models are required to:

  • Prepare technical documentation that covers the model's training and testing process and the results of its evaluation
  • Prepare information and documentation for providers that integrate the AI model into their own AI systems, giving them a good understanding of the model's capabilities and limitations and enabling them to meet their compliance obligations under applicable legislation
  • Put in place a policy to comply with copyright law
  • Make publicly available a sufficiently detailed summary of the content used to train the GPAI model

Timelines

After entry into force, the Act will apply according to the following timeline:

  • 6 months for prohibited AI systems
  • 12 months for GPAI
  • 24 months for most remaining AI systems, except those specified below
  • 36 months for AI systems used as safety components in industrial production, transport, aviation, medicine, and similar areas

By these deadlines, developers of AI systems and models that operate in or intend to enter the EU market must ensure that they have fulfilled all the above requirements.


We continue to monitor how the regulation of advanced technologies is evolving, and our team will keep sharing useful information with you.

Deloitte experts’ comments presented in this review are for informational purposes only and should not be relied upon by taxpayers without a detailed expert analysis of the specific matter.

Subscribe to our Telegram channel "Deloitte Ukraine Voices" to stay up to date with the latest firsthand news, articles, podcasts, and other materials. Hear the voices of our experts!
