

EU Artificial Intelligence Act Deep Dive

Details and background on the implementation requirements

The aim of the AI Act is to improve the functioning of the European single market and promote the uptake of human-centred and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights – including democracy, the rule of law and environmental protection – against possible harmful effects of AI systems. In this respect, the AI Act is also a product safety regulation: it aims to protect European consumers from fundamental rights violations resulting from the inappropriate use of AI. In the future, providers of AI systems classified as high-risk will have to verify and formally confirm compliance with numerous requirements reflecting the principles of trustworthy AI – from AI governance to AI quality.

Violations of these requirements are subject to severe fines. In addition, providers may be forced to withdraw their AI systems from the market. Despite its extensive principles, rules and procedures, as well as new supervisory structures, the law is not intended to slow down innovation in the EU, but to promote further development in the AI sector – especially by start-ups and SMEs – through legal certainty and regulatory sandboxes. For a detailed overview of the AI Act and its implications, we invite you to read our publication, EU Artificial Intelligence Act Deep Dive.

Get in touch

  • Simina Mut, Partner, Deloitte Legal Central Europe Leader
  • Gregor Strojin, Deloitte Legal Central Europe AI Regulatory CoE Leader