As the EU evolves its policy on regulating Artificial Intelligence, product liability issues arising from the use of products incorporating AI systems have been tackled as a priority, not only in the proposed AI Act itself, but also in proposals to:
- amend the strict liability regime under the Product Liability Directive (85/374/EEC) (PLD), and
- introduce a new, first of its kind, AI Liability Directive for non-contractual, civil liability.
These proposals set out how the EU intends to legislate for liability risks in AI products and provide recourse for consumers in the event these cause harm. However, they have also raised concerns given the potential significance of some of the changes proposed.
Stakeholder views in response to the proposals have been mixed. This is unsurprising given the balance that needs to be struck between modernising the product liability regime to adequately protect consumers from the risks of AI and avoiding the stifling of innovation. It seems clear that a number of the proposals will make it significantly easier for consumers to bring claims against technology companies and will increase the exposure of those supplying AI products and systems in the EU.
Who, and which products, will fall within the scope of the proposals?
The draft AI Act is working its way through the legislative process, with trilogues likely to start shortly. The definitions remain very much up for debate but, as currently drafted, the Regulation applies to all providers that place AI systems on the market or put them into service in the EU, to users of AI systems in the EU, and to providers and users outside the EU where the output produced by the system is used in the EU. AI systems in the context of the AI Act are intended to include (but are not limited to) the following modes of operation, each illustrated in the short sketch after this list:
- machine learning approaches, supervised or unsupervised, including deep learning and artificial neural networks (networks loosely modelled on the neurons of a biological brain, intended to deliver better pattern recognition and more general learning)
- logic- and knowledge-based approaches (systems that derive conclusions by applying inference rules to an encoded knowledge base)
- statistical approaches (finding relationships and dependencies between data sets through probability calculations).
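For readers less familiar with these categories, the following minimal Python sketch gives a toy, purely illustrative example of each family of techniques. Nothing in it is drawn from the legislation; all names, data and functions are invented for illustration.

```python
# Illustrative sketch only: toy examples of the three families of techniques
# described above. All identifiers and data are hypothetical.

from collections import Counter
import math

# 1. Machine learning (supervised): a trivial 1-nearest-neighbour classifier
#    that "learns" from labelled examples rather than following hand-written rules.
training = [((1.0, 1.0), "low_risk"), ((9.0, 8.5), "high_risk"), ((8.0, 9.0), "high_risk")]

def classify(point):
    # Predict the label of the closest training example.
    return min(training, key=lambda ex: math.dist(ex[0], point))[1]

# 2. Logic- and knowledge-based approach: forward chaining over explicit
#    if-then rules encoded by a human expert, not learned from data.
rules = [({"smoke"}, "fire"), ({"fire"}, "evacuate")]

def infer(facts):
    # Repeatedly apply rules until no new conclusions can be drawn.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# 3. Statistical approach: estimating relationships between events from
#    observed frequencies (here, a simple probability estimate).
observations = ["update_ok", "update_ok", "update_fail", "update_ok"]

def probability(event):
    counts = Counter(observations)
    return counts[event] / len(observations)

if __name__ == "__main__":
    print(classify((8.5, 8.7)))        # -> "high_risk"
    print(infer({"smoke"}))            # -> {"smoke", "fire", "evacuate"}
    print(probability("update_fail"))  # -> 0.25
```

The point of the contrast: the first approach infers behaviour from data, the second follows rules a human wrote down, and the third quantifies relationships probabilistically. All three fall within the draft Act's intended scope.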
AI systems are classed into risk categories, each with separate obligations and levels of scrutiny. AI systems that pose an unacceptable risk, for example, will be banned. These include systems that can harm people through subliminal influence, as well as those that actively classify people and treat them differently according to their personality or social behaviour (eg social scoring). High-risk systems may only be used subject to strict compliance requirements; this applies, for example, to systems used for remote biometric identification, securing critical infrastructure, decision-making in human resources management, creditworthiness evaluation, and risk assessment in criminal prosecution. Systems that pose only little or minimal risk need, at most, to meet certain transparency requirements.
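To make the tiering concrete, here is a minimal, purely illustrative sketch expressing the risk categories described above as a simple data structure. The tier names and example systems are paraphrased from this article, not taken from the legislation, and the helper function and all identifiers are invented.

```python
# Illustrative sketch only: the risk tiers described above as a simple mapping.
# Obligation summaries and examples are paraphrased from the article text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted only under strict compliance requirements"
    LIMITED_OR_MINIMAL = "subject, at most, to transparency requirements"

# Examples of each tier, drawn from the discussion above.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["subliminal manipulation", "social scoring"],
    RiskTier.HIGH: [
        "remote biometric identification",
        "critical infrastructure security",
        "HR decision-making",
        "creditworthiness evaluation",
        "criminal risk assessment",
    ],
    RiskTier.LIMITED_OR_MINIMAL: ["most other AI systems"],
}

def treatment(system: str) -> str:
    """Toy lookup of the regulatory treatment for an example system."""
    for tier, examples in EXAMPLES.items():
        if system in examples:
            return tier.value
    return RiskTier.LIMITED_OR_MINIMAL.value

if __name__ == "__main__":
    print(treatment("social scoring"))  # -> "banned outright"
```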
The AI Act is designed to set the safety standards and regulatory regime for placing products incorporating AI systems on the market. In this respect it is like any other product safety legislation, focussing on minimising risk and preventing damage.
Who is liable in the event of a defective AI system?
In the event that damage occurs, the amendments to the PLD are designed to expand the strict liability framework to products incorporating AI systems, allowing compensation for damage where products such as robots, drones or smart-home systems are made unsafe by the software updates, AI or digital services required to operate them. Damage includes material losses resulting from loss of life, damage to health or property, and loss of data.
The new PLD will also expand the list of potentially liable defendants. The manufacturer, own-brander and/or importer into the EU will remain liable on a joint and several basis, as is currently the position, but the legislation also introduces new potential defendants. For example, entities that substantially modify a product after it has been placed on the market may now be liable. Further, where the manufacturer of a defective product is based outside the EU, the importer of the product and any Authorised Representative of the manufacturer can be held liable for the damage it causes, extending the pool of potentially liable parties to Authorised Representatives of non-EU businesses and to fulfilment service providers (ie warehouse, packing and postage providers).
The new AI Liability Directive sets out a targeted reform of national fault-based liability regimes and will apply to claims against any person, such as software developers, providers or users, whose fault influenced the AI system that caused the damage. It covers any type of damage recognised under national law (eg harm not covered by the PLD, such as infringements of fundamental rights, or claims against users of products rather than against the manufacturer).
How do the proposals assist those claiming damages relating to AI systems?
The aim of the proposals is to shift difficult evidentiary problems regarding the culpability of AI systems from the consumer onto the manufacturer, AI provider or operator.
As a result, there are plans to require AI system operators to disclose relevant evidence at a claimant's request where a high-risk system (under the AI Act) is suspected of causing damage. If the operator fails to comply with a subsequent court order for disclosure, the burden of proof is reversed: a rebuttable presumption arises that the operator breached its duty of care.
A reversal of the burden of proof with regard to liability also applies where, for example, the relevant cyber security requirements were not adhered to during the development and operation of the AI system, or where the system was trained on data sets that did not meet the required quality criteria.
This is a significant step change for tech companies placing AI products on the EU market and will likely make evidence gathering easier for claimants by allowing them to seek a court order requiring disclosure of relevant records. The proposals do recognise that appropriate safeguards for the protection of sensitive information and trade secrets should also be provided, but no further clarity is given on this.
What are stakeholders saying?
Such far-reaching proposals to amend the European product liability regime are naturally facing scrutiny from a wide variety of stakeholders.
- Technology companies and computer industry associations have raised concerns about the inclusion of standalone software and AI systems in the scope of the revised PLD, because the concept of strict liability does not fit neatly with complex software and AI supply chains in which unknown vulnerabilities can emerge. There is also debate as to whether there is yet sufficient evidence to justify these changes, particularly the AI-specific obligations: there have been very few cases involving damage caused by AI, so the application of the existing rules to AI is largely untested. Arguably, new liability rules should instead target high-risk use cases.
- Concerns have been raised about some of the proposed new defendants, including that the definition of "economic operator", which replaces "producer", is too broad. It covers a number of different parties in the supply chain beyond the manufacturer, extending to authorised representatives, fulfilment service providers and, in certain circumstances, online marketplaces.
- There are concerns about the lack of clarity in the PLD on how the new legislation will interact with existing laws covering the same or similar damage. For example, it is unclear how data loss and psychological damage will be treated, and whether psychological damage must be linked to the pain caused by a bodily injury or can be claimed as stand-alone non-material harm.
- Businesses have pointed to major issues around the degree of legal certainty under the AI Liability Directive, not least because the AI Act, to which it is related, has not yet been agreed. For example, there is no definition of "fault", which is likely to lead to inconsistent approaches by national courts. Similarly, the scope of "claims for damages" might be interpreted differently. The inclusion of any and all non-material damage may be a catalyst for limitless and unfair claims against providers. Other stakeholders, however, such as the Federation of German Consumer Organisations, have welcomed the inclusion of non-material damages and are pushing for a more extensive liability regime that would include strict liability.
- Regarding the disclosure of evidence on request, the German Banking Industry Committee (among others) has argued that such procedures are alien to German law and to other European legal systems. The disclosure framework may leave too much room for interpretation and be overly burdensome for companies, while also leaving them exposed to unfair and abusive claims; it also poses potential risks to trade secrets and confidentiality. Bitkom, an industry association of the German information and telecommunications sector, has argued that an affected party will not know whether they can submit a request to disclose evidence, as they would first need to know whether the AI system in question is classified as "high risk". In its view, stricter standards should apply when evaluating the admissibility of such a request, in order to preserve the principle of minimal invasiveness. Other stakeholders have argued for further strengthening the courts' powers to conduct far-reaching audits of AI systems. On the other hand, some have argued for lower thresholds for consumers and a more practical approach to submitting evidence disclosure requests.
- While the proposals are not intended to reverse the burden of proof, and it remains for a claimant to prove their case, the net effect of the presumptions of defectiveness and causality arguably amounts to a reversal in respect of products that are highly technical or scientifically complex. Businesses argue that this shift is unwarranted, as AI systems often behave in unpredictable ways without the provider necessarily being negligent, and that it may serve to stifle innovation. Consumer associations take a different view, arguing that manufacturers and providers are better placed to gather evidence about highly complex and opaque AI systems. Further, claimants will still need to show that the output produced by the AI system, or its failure to produce an output, caused the damage, and the liability presumptions will be rebuttable.
Next steps?
It will be some time before the full set of AI regulations comes into force and the Directives are transposed into national law. The proposed amendments to the PLD and the newly proposed AI Liability Directive are currently before the European Parliament, so we await the final form of the new regime, which is likely to come into force in 2024-2025.
European governments have generally published strategic goals and objectives in respect of AI without producing any concrete legislative proposals. The EU Member States need to wait until the proposals have been finalised before taking further steps, but change is definitely coming.