Posted: 5 April 2023 · 5 min read

Artificial Intelligence: An updated approach to EU liability legislation

It may become easier for individuals in the European Union who are harmed by AI systems to seek compensation, thanks to two new Directives proposed by the European Commission in September 2022. In conjunction with the proposed EU AI Act (“EUAIA”), the proposed Directive on adapting non-contractual civil liability rules to artificial intelligence (“AI Liability Directive”) and the proposed revision of the existing EU product liability rules (“Updated Product Liability Directive”) would form a three-part regulatory system for AI in the EU, combining upstream harm prevention with downstream harm redress.

In this article, we introduce the new AI Liability Directive and Updated Product Liability Directive, and propose some foundational steps organisations deploying artificial intelligence can take to prepare for regulatory change.

Figure 1: A three-part regulatory system for AI in the EU, with upstream harm prevention provided by the EUAIA and downstream harm redress provided by the proposed directives.
 

The AI Liability Directive: a two-pronged approach to redressing AI harms

For all the characteristics that make AI an attractive business tool – from cost-effectiveness and operational efficiency through to its autonomy – the complexity and opacity of AI systems can make them difficult to explain and understand, especially for external stakeholders. Because of this, a person who believes they have been harmed may be unable to demonstrate that an act or omission involving an AI system caused their loss. Consequently, they are likely to struggle to meet the requirements of existing fault-based civil liability laws and may ultimately find it impractical or impossible to bring a claim to recover damages.

The AI Liability Directive aims to address this by reducing the barriers to accessing justice where an AI system may have caused harm. Applying extraterritorially to providers, developers, or users of any AI systems operating within the European Union market, the Directive has two key elements:

1. Provision of evidence 

The first element is that Member States’ national courts would be empowered to make it easier for a person claiming against a provider or user of an AI system for alleged harm to access relevant evidence.

The proposed powers, which are intended to be used only where a claimant can demonstrate that its claim is plausible and that all proportionate steps to obtain the relevant evidence have already been taken, would:

  • Enable national courts to order the disclosure of information by persons who provide or use high-risk AI systems suspected to have caused damage. 
  • Stipulate that, if a defendant fails to comply with such an order from a national court to provide evidence in a claim for damages, the court shall presume the defendant did not comply with the relevant duty of care to which the requested evidence relates. This presumption would be rebuttable by the defendant demonstrating that the duty of care was, in fact, satisfied.

The European Commission also recognises in its proposal the potential concern that organisations may have about disclosing confidential information, particularly trade secrets and proprietary information. Accordingly, the proposal makes clear that the interests of all parties, including third parties, will be taken into account in determining what information should be disclosed in support of a claim.

2. Rebuttable presumption of a causal link between a failed duty of care and harm caused by the AI system

The second element is that Member States’ national courts would be required to presume a causal link between the fault of the defendant and the harmful output produced by the AI system, or its failure to produce a relevant output, when:

  • The claimant has demonstrated (or the court has presumed due to a failure to provide evidence) that the defendant has failed in a duty of care set out in EU or national law, which is intended to protect against the harm caused.
  • It is reasonably likely in the circumstances that the failure to satisfy the relevant duty of care has influenced the output produced by the AI system or the failure of the AI system to produce an output.
  • The claimant has demonstrated that the output produced by the AI system is what gave rise to the damage.

The defendant can rebut this presumed causal link, for example, by providing evidence that its fault could not have caused the damage. Practically, this reinforces the importance of the stringent record-keeping procedures set out under the EUAIA.

The presumption will not apply to high-risk AI systems if the defendant can demonstrate that sufficient evidence and expertise are reasonably accessible for the claimant to prove a causal link, and it will only apply to non-high-risk AI systems where the court considers it excessively difficult for the claimant to prove the causal link.
 

The Updated Product Liability Directive: updates to bring AI within scope

The current EU product liability regime was enacted to provide a redress mechanism for people who suffer physical injury or damage to property due to defective products, i.e., products that are not as safe as the public is entitled to expect. Unlike the proposals set out in the AI Liability Directive, which relate to fault-based claims, the current Product Liability Directive established a no-fault liability regime in order to provide certainty as to who is responsible in the event harm is caused by a defective product. Nonetheless, the burden of proof generally remains on the injured person to prove the damage they have suffered, the defectiveness of the product and the causal link between the two. Liability generally falls on the manufacturer of the product or, where the product is imported into the EU, on the importer.

The changes proposed for the Updated Product Liability Directive would bring AI systems within the scope of the product liability regime by:

  • Confirming that AI systems and AI-enabled goods fall within the definition of products for the purposes of this legislation. If defective AI causes damage, compensation is available without the injured person having to prove the manufacturer’s fault.
  • Recognising that not only hardware manufacturers but also providers of software and digital services that affect how an AI-related product works can be held liable.
  • Making it clear that the responsible person can be liable for changes they make to products they have already placed on the market, including when those changes are triggered by software updates or machine learning.
  • Establishing that a responsible person is not exempt from liability where a product becomes defective because of a lack of the software updates or upgrades necessary to maintain safety.

In addition, the changes to the Product Liability Directive would further strengthen the regulatory coverage of AI by:

  • Broadening the definition of damages that can be suffered as a result of defective AI products to include loss of data that is not used exclusively for professional purposes.
  • Alleviating the burden of proof in certain cases, such as where, due to technical complexity, the claimant faces excessive difficulties in proving the defectiveness of the product and/or the causal link between its defectiveness and the damage. This is primarily achieved by allowing certain presumptions, which the defendant can then seek to rebut.
     

Foundational steps to prepare for regulatory change

These proposed directives are likely to evolve as they make their way through the EU legislative process before ultimately being reflected in the national law of EU Member States. However, they already make clear the increasing expectations on, and likely liability of, developers, providers, users, manufacturers, and importers of AI systems. Organisations should take a proactive approach now to prepare for future regulatory requirements, and could begin by considering the following:

Focus on documentation

The AI Liability Directive and the Updated Product Liability Directive emphasise the importance of organisations having robust risk management frameworks that require accurate, timely, and comprehensive data and documentation to be maintained in relation to their AI systems. Failure to do so is likely to make defending claims more difficult and to result in organisations incurring further cost and time in order to rebut unfavourable presumptions.

Systematise record-keeping processes

Organisations operating high-risk or non-high-risk AI systems should consider the appropriateness of their record-keeping systems and data management to ensure that they are able to comply with the requirements of the EUAIA and to respond to requests for disclosure or provide evidence of compliance, should such requests arise.
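By way of a purely illustrative sketch (neither the EUAIA nor the proposed directives prescribe any particular technical format, and every name and field below is a hypothetical assumption), structured, timestamped logging of each AI system output is one way to capture the kind of evidence a court might later order to be disclosed:

```python
import json
import logging
from datetime import datetime, timezone

# Dedicated logger for AI decision records. In practice this would write to
# durable, access-controlled storage rather than a local file.
logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_decisions.jsonl"))

def log_ai_decision(system_id: str, model_version: str,
                    inputs: dict, output: dict, operator: str) -> None:
    """Append one structured, timestamped record of an AI system decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # ties the record to the AI inventory
        "model_version": model_version,  # which version produced the output
        "inputs": inputs,                # inputs as received, or a reference
        "output": output,                # output as produced
        "operator": operator,            # who or what invoked the system
    }
    logger.info(json.dumps(record))

# Hypothetical example: record a single credit-scoring decision.
log_ai_decision(
    system_id="credit-scoring-v2",
    model_version="2.3.1",
    inputs={"application_id": "A-1042"},
    output={"score": 0.81, "decision": "refer"},
    operator="underwriting-service",
)
```

Append-only, machine-readable records along these lines would make it considerably easier to evidence compliance with a relevant duty of care, and therefore to rebut the presumptions described above.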

Maintain an accurate inventory

A robust inventory of all AI systems operated by an organisation will also become crucial as algorithmic and AI systems become more widespread. Many companies may not even be aware that they perform the kinds of activities, and deploy the kinds of systems, that fall within the EU’s broad definition of AI systems and that are the focus of the EUAIA and the AI Liability Directive. Without a strong understanding of where AI is used throughout their operations, organisations cannot expect to ensure compliance with regulatory and legislative requirements.
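As one illustrative starting point (the schema below is entirely hypothetical and not mandated by any of the proposals), an inventory can be as simple as a structured record per system, capturing ownership, purpose, and a provisional risk classification:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI system inventory (illustrative only)."""
    system_id: str            # unique internal identifier
    name: str                 # business-facing name of the system
    owner: str                # accountable team or individual
    purpose: str              # what the system is used for
    risk_classification: str  # provisional category, pending EUAIA mapping
    deployed_in_eu: bool      # whether the EU regime is likely to apply
    third_party_components: list = field(default_factory=list)

# Hypothetical example entry; a real inventory would be populated from a
# systematic discovery exercise across the organisation.
inventory = [
    AISystemRecord(
        system_id="credit-scoring-v2",
        name="Retail credit scoring",
        owner="Risk Analytics",
        purpose="Score consumer credit applications",
        risk_classification="high-risk (provisional)",
        deployed_in_eu=True,
        third_party_components=["vendor-scoring-lib"],
    ),
]

# Surface the systems most likely to attract the strictest obligations.
for record in inventory:
    if record.deployed_in_eu and "high-risk" in record.risk_classification:
        print(f"{record.system_id}: review against EUAIA high-risk requirements")
```

Even a lightweight structure like this gives compliance, legal, and engineering teams a shared view of where AI is deployed and which systems are likely to attract the strictest obligations.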
 

Final thoughts

In conjunction with the proposed EUAIA, these proposed directives represent substantial regulatory change for AI in the EU through a wide-reaching combination of upstream harm prevention and downstream harm redress.

Amidst so much change, organisations that take the initiative on regulatory preparedness can continue to create, innovate, and execute with confidence.

To understand more about the implications of the AI Liability Directive, the Updated Product Liability Directive, the EUAIA and more broadly how to prepare for upcoming AI regulation, please do get in touch.

Key contacts

Mark Cankett

Partner

Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice as it relates to helping firms gain confidence over their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience supporting firms to enhance their governance and control environments, as well as to investigate and validate such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.

Andrew Joint

Partner

Andrew is a technology and outsourcing lawyer, with a particular specialism in cloud, digital and business transformation, AI, Robotic Process Automation, IoT, system integration and complex technology contracting and advising. He has advised many of the largest and leading technology and outsourcing suppliers and their customers on their provision, procurement and use of the most advanced tools and services. He has particular experience advising in regulated sectors, including financial services, healthcare and the public sector. He has previously been described in Chambers & Partners as “Brilliant” and as a lawyer who is “appreciated by clients for his understanding of technology and knowledge of the market”. He is ranked as a leading lawyer in both the Legal 500 and Chambers & Partners.

Louis Wihl

Director

Louis is a technology and commercial contracts lawyer with over 12 years’ experience advising a range of both customer and supplier clients, from early-stage organisations to household names and listed companies. He leads on the drafting and negotiation of a wide range of high value, business critical and strategic contracts, often in the context of digital transformation projects and regulated outsourcings. These include agreements for software-as-a-service, platform-as-a-service, infrastructure-as-a-service, on-premises software licences, systems integration services, IT outsourcing, business process outsourcing and other IT and cloud-based technology solutions. Louis also advises on the use of emerging technologies and is Deloitte Legal’s UK Artificial Intelligence Legal Advisory Lead. In this role he brings together the best of Deloitte’s AI legal knowledge with Deloitte’s business, technology and sector expertise to provide comprehensive solutions to the challenges AI poses, enabling organisations to make the most of the opportunities AI offers.

Barry Liddy

Director

Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised Subject Matter Expert (SME) in AI regulation and has a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to Generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with these complex foundation models. Barry also leads our Digital Services Act (DSA) and Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these Internet Regulations. As part of this role, Barry oversees our firm’s assessment of controls covering crucial areas such as consumer profiling techniques, recommender systems, and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with relevant regulations in this space.

Tom Wood

Assistant Manager

Tom is an Assistant Manager in Deloitte’s Algorithm & AI Assurance practice, based in London and part of the Banking and Capital Markets Group. He is a workstream lead in the delivery of a major digital transformation partnership, and supports multinational financial institutions and G-SIBs in navigating complex AI risks and the European regulatory landscape.