Radar – October 2022 – Viewpoint
The buzz around AI-based technology has grown in recent years, creating excitement for the future. However, existing legal frameworks are longstanding and weren't designed with AI in mind. Uncertainty around liability is one of the major barriers to wider adoption. The existing EU civil liability regime simply isn't equipped to resolve claims for damage caused by AI products such as drones, smart devices, robots and automated vehicles. Businesses struggle to predict how existing liability rules will be applied to AI and where AI fits within the liability framework, making it difficult to insure themselves against that risk. On the flip side, the barriers to claiming legal compensation when something goes wrong make it difficult for consumers to trust and engage with these new technologies. Modernisation of the legal framework has been needed for some time to address the liability challenges posed by AI technology, and these new proposals for the EU come after a long period of debate.
Alongside its proposal for an updated Product Liability Directive, the European Commission has also published a proposal for a new Directive on adapting non-contractual civil liability rules to AI systems.
This AI Liability Directive aims to create certainty around liability for damage caused by AI-enabled products and services. It is proposed new EU law and, post-Brexit, will not be implemented in the UK. It will, however, affect businesses operating in the European market and placing AI systems on the EU market.
Once implemented into Member States' national law, it will provide recourse for users of AI systems to seek compensation from tech providers for harm suffered from using those systems. Harm includes harm to life, property, health or privacy caused by the fault or omission of a software developer, provider, user or manufacturer of AI systems. The directive is closely linked to the AI Act published in 2021, as well as the overall AI regulatory framework. These packages of reforms (together with the Collective Redress Directive) appear designed to strike a balance between consumer protection and business innovation but, in reality, the introduction of rebuttable presumptions and disclosure obligations will make it much easier for claims to be brought against tech companies.
The proposal provides for two main measures, which together aim to create a 'safety net' for compensation in the event of damage.
The first is a rebuttable presumption aimed at resolving the difficulties claimants face in establishing that the AI system caused the damage suffered (the so-called "black box" issue). Where claimants can prove that the AI system does not comply with the AI Act or other regulatory requirements (i.e. relevant national or European legislation), or that a defendant has failed to disclose evidence as required (or has destroyed it), there will be a presumption that the defendant breached the relevant duty and, as such, causation of the damage suffered will be presumed.
The second creates a new disclosure obligation on tech companies responsible for high-risk AI systems (those which have an impact on safety or fundamental rights) regarding technical documentation, testing and compliance. This, alongside the EC's proposal to revise the EU Product Liability Directive, is of real significance to European tech companies, which for many years have managed to escape disclosure obligations in civil claims. Careful planning will be required to protect IP and trade secrets in these innovative products, as well as adequate document retention policies to avoid any inadvertent destruction of documentation.
If adopted, the directive will have a significant impact on tech companies working on and developing AI systems. The proposal will shift the EU product liability regime for these advanced technology products and make it easier to bring claims for failures and non-compliance. The benefit for businesses adopting AI will be increased certainty regarding their potential liability: under the existing product liability framework, it has been unclear where AI sits, given the difficulty in drawing a distinction between the product and the service. It will, however, also make it easier for claimants (both businesses and consumers) to bring claims, which is a key consideration for businesses in their risk assessments and development of innovative AI systems.
Under the proposed directive, a causal link between the fault of the defendant and the output (or lack thereof) produced by the AI system will be presumed where three main conditions are satisfied: (1) the demonstration of a fault of an AI system; (2) a reasonable likelihood that the fault influenced the output (or lack thereof) produced by the AI system; and (3) the demonstration that that output gave rise to the damage. To support claimants in demonstrating fault, courts will have the power to order providers or users of high-risk AI systems to disclose information about their systems, and businesses should be aware of the possibility that they will have to do so. This is a significant development for European tech companies, which until now have benefited from limited disclosure obligations (if any) in civil claims. There are limitations, however: disclosure is only required where it is proportionate and necessary, and the defendant's interests, such as IP, trade secrets and confidential information, will be taken into account.
For now, businesses should watch this space and await the outcome of the European legislative process, which is likely to take a number of months. It may then take a few years before this new law is implemented by Member States at a national level. It remains to be seen whether the UK will follow suit, but we do expect to see a White Paper in due course addressing similar issues.
If you would like further information on the EC's proposals, please contact our product liability and safety team.