7 January 2025
It seems inevitable that we will see AI-related disputes over the next few years, and the courts will need to grapple with applying traditional legal concepts to new technologies.
In this article, we take a brief look at the unique characteristics of AI which raise novel liability questions as well as the current and developing regimes potentially available to compensate users for damage suffered when AI systems fail.
The nature of complex AI systems makes it difficult to determine how harm has occurred. Their complexity, so-called 'black box' autonomous behaviour, lack of predictability and continuous learning functionality make traditional concepts such as breach/defect and causation difficult to apply.
Multiple parties are also typically involved in developing and deploying AI, which can make the question of who is liable for a failure difficult to assess. Is it the manufacturer, developer, supplier or user? Or was there an issue with the AI system itself?
Negligence
To establish negligence, a claimant must prove the defendant owes a duty of care which requires a sufficiently proximate relationship between the parties. This may be difficult if the defendant (manufacturer/developer/supplier) has not retained any control over the AI system.
Proving causation might also be difficult if it is hard to identify how a failure occurred, where in the supply chain it arose, and which party is responsible, particularly where the AI system has continued to develop after initial deployment by way of autonomous machine learning. The unpredictability of AI systems might also make it difficult for a claimant to prove their loss was foreseeable.
Breach of contract
Contractual claims might arise under statutory warranties or implied terms if an AI system is not fit for purpose, not of satisfactory quality, or does not match its description. It is, however, debatable whether AI qualifies as a 'product' for these purposes. Furthermore, contractual clauses excluding or limiting liability may not cover AI, and there remains significant risk for defendants seeking to rely on them in business-to-business (B2B) contracts, as they will be subject to the test of reasonableness.
Strict liability
The Product Liability Directive 85/374/EC (PLD) establishes a strict liability (no fault) regime enabling consumers to pursue a claim where a defect in a product has caused personal injury or property damage. The courts have, however, found that software which is not embedded in hardware does not constitute a 'product'. As such, there is uncertainty as to whether intangible code underpinning an AI process would be a 'product', leaving a gap in the law.
The EU is leading the charge in updating the regulatory and product liability frameworks to ensure they are befitting of new and innovative technologies. The New Product Liability Directive 2024/2853 (New PLD) came into force on 9 December 2024 and will repeal the nearly 40-year-old PLD. Member States must implement the changes required by the New PLD into their national laws by December 2026. During the transition period, both frameworks may apply in parallel, depending on the product’s time of placement on the market.
The New PLD is similar to its predecessor in that it is a strict liability regime, but it also introduces some significant changes. In particular, its scope specifically includes software and AI, irrespective of the mode of supply or usage, and whether embedded in hardware or distributed independently. Whilst small software developers can contractually exclude recourse by the final manufacturer of a product which integrates an AI system, they cannot exclude liability for death or personal injury.
The broader remit is significant as it means AI system providers, third-party software developers and other players in the supply chain can be held liable where a defective AI system causes harm (including, for the first time, damage to mental health and the destruction of, or irreversible damage to, data). This is so even if the defect was not their fault.
Furthermore, when assessing defectiveness, courts must take into account compliance with relevant mandatory product safety requirements. These will include the new Artificial Intelligence Act (AI Act), which will be effective from 2 August 2026. The AI Act sets safety standards which must be met (and tested via a conformity assessment) before AI systems are placed on the market, and monitored throughout their life cycle, to minimise the risks they present. A breach of those safety standards could lead to a finding of defect under the New PLD.
Product liability will also extend beyond the point at which the product is placed on the market, such that a manufacturer who retains control can be liable for a later defect caused by a software upgrade or update, or by the continuous learning of an AI system. Equally, another party which substantially modifies a product after it is placed on the market can be liable if the modification causes damage.
The New PLD aims to provide businesses with certainty when supplying or deploying AI systems in respect of the liability risks they face. However, in reality, the legislation is claimant friendly and likely to hinder rather than promote innovation.
This is particularly the case given that defectiveness will be presumed in cases involving complex AI systems where claimants would face excessive difficulties in proving their case (provided they demonstrate that the product contributed to the damage and that it is likely that the product was defective, or that its defectiveness is a likely cause of the damage). Furthermore, defendant companies will face more extensive disclosure obligations than consumers, including an obligation to provide detailed information on the AI system used in a product. This is a significant change to the rules of civil procedure in most European jurisdictions.
Case law in this area is still limited, and it remains to be seen how the courts will apply existing and new liability frameworks to questions of AI liability. Businesses can, however, take steps now to mitigate the risk of liability in the event of AI failures.
As AI continues to evolve and becomes more integrated into various sectors, the question of liability is becoming increasingly complex. The answer will be shaped by a combination of updated legislation, regulatory oversight and case law, and we recommend that stakeholders continue to monitor these developments closely.
By Katie Chandler and Matthew Caskie
By Katie Chandler and Helen Brannigan