Whilst AI has already begun to assist lawyers with tasks such as document review and data analysis, it is the more nuanced role of generative large language models (LLMs) in legal research and drafting that requires scrutiny, particularly because of the risk of 'hallucinations'.
The English courts have already seen the first case in which submissions referred to fictitious authorities as a result of the deployment of AI. Cases like this raise the question: should liability fall upon the lawyer who relied on the hallucinations, or upon the creators of the LLM that produced them?
Hallucinations
Hallucinations are a well-known limitation of LLMs, where the model generates factually incorrect outputs that do not correspond to the "training data, [which] are incorrectly decoded by the transformer or do not follow an identifiable pattern". As noted in guidance issued for the UK judiciary in December 2023, there is a risk that generative AI "may make up fictitious cases, citations or quotes".
The recent case of Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) is the first reported UK decision where a litigant cited AI-generated fictitious case law. The tribunal noted that many harms may flow from such false submissions, including wasted time and resources for opposing parties and the court, deprivation of authentic legal arguments for the litigant, potential reputational damage to the legal system, and promotion of cynicism about the judicial system.
While Harber v Commissioners for HMRC involved a self-represented litigant who claimed ignorance as to the falsity of the cases cited, the implications for lawyers are significant. As generative AI is increasingly integrated into legal research and drafting tools, lawyers may unknowingly, or recklessly, rely on hallucinated case law, citations or reasoning. This raises questions about liability – is it the lawyer or the provider of the AI tool who should bear responsibility?
The burden of liability
As yet, there is no body of case law in which lawyers themselves have submitted hallucinated material to the court, so it is currently unclear how liability would be determined. Regardless, the courts are unlikely to be sympathetic to lawyers claiming they reasonably believed AI-generated information was accurate, given that the risk of hallucinations is widely known.
Moreover, lawyers have professional obligations precluding the submission of inaccurate or inappropriate material before the courts – suggesting that irrespective of the actions of the LLM (and by proxy, its creator), lawyers making use of the tools need to carefully observe their own individual regulatory responsibilities. In fact, the guidance issued for the judiciary in England and Wales on the responsible use of AI went as far as to suggest that it may be necessary for courts and tribunals to "remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot".
At a minimum, it seems the judiciary will hold lawyers to account where their use of LLMs in making legal submissions conflicts with their professional obligations. Lawyers should remain cognisant of this, and should not expect to shift the burden of responsibility onto the provider of an AI tool in this respect.
Possible developments
The European Union is leading the charge in adopting a regulatory and liability framework for AI. Recent EU developments include:
- On 21 May 2024, the Council of the EU approved the EU AI Act. It was published in the Official Journal on 12 July 2024 and will come into force on 1 August 2024. It aims to mitigate hallucination risks by mandating data transparency for general-purpose AI models, requiring outputs to be traceable and explainable. This might involve the LLM generating an explanation of how it reached its answer to a query, providing more visibility into the inner workings of the AI tool in question.
- Revisions have been proposed to the EU Product Liability Directive, expected to come into force this year and to be followed by a two-year transition period, which would expand the definition of a 'product' to include software and cover AI models. Under the proposed amendments, AI system providers can be held liable for defective AI systems placed on the market on a strict, no-fault basis, along with a number of other players in the supply chain, including third-party software developers and programmers.
The EU AI Act is, in large part, a form of product safety legislation and seeks to implement safety standards to minimise risks in relation to AI systems before they are placed on the market and throughout their lifecycle. Product safety legislation works hand in hand with liability legislation. Not all risks can be prevented and when they materialise, liability legislation steps in to compensate for the harm suffered.
While not directly applicable in the UK, these EU-level developments signal a trend towards greater accountability for AI providers that UK policymakers may choose to follow in the future. To date, the UK has no immediate plans to legislate in the AI area. It has instead left it to the relevant regulators to apply certain principles, following a proportionate and risk-based approach, to actors whose activities create risk.
What should lawyers do now?
Ultimately, as generative AI becomes increasingly embedded in legal practice, a structured and proactive risk management strategy will be crucial for law firms. This should involve robust verification processes to ensure the accuracy of AI-assisted outputs, as well as complete transparency with clients regarding the use of these technologies. A specific AI policy should be drawn up to set clear parameters for its use. Lawyers must remain vigilant, as they risk facing significant liability if they fail to implement appropriate safeguards when deploying these powerful yet imperfect AI tools.
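By way of illustration only, the sketch below shows, in Python, one very basic form such a verification step could take: extracting citation-like references from an AI-assisted draft and flagging any that a human reviewer has not yet confirmed against a trusted source. The citation pattern and the second, invented citation are assumptions made for demonstration; any real workflow would still require a human to confirm both the existence and the substance of every authority cited.

```python
import re

# Simplified, illustrative pattern for neutral citations such as
# "[2023] UKFTT 1007 (TC)" or "[2024] EWCA Civ 123". It will not
# capture every citation format used in practice.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+[A-Z]+(?:\s+[A-Za-z]+)?\s+\d+(?:\s+\([A-Za-z]+\))?"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return all citation-like strings found in an AI-assisted draft."""
    return NEUTRAL_CITATION.findall(draft_text)

def flag_unverified(citations: list[str], verified: set[str]) -> list[str]:
    """Return citations not yet confirmed by a human reviewer against a
    trusted source (e.g. a law report database)."""
    return [c for c in citations if c not in verified]

if __name__ == "__main__":
    draft = (
        "As held in Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC), "
        "and in Smith v Jones [2022] EWHC 9999 (Ch), ..."  # second citation is invented
    )
    # Citations a human reviewer has already confirmed exist and support the draft.
    verified = {"[2023] UKFTT 1007 (TC)"}

    for citation in flag_unverified(extract_citations(draft), verified):
        print(f"UNVERIFIED: {citation} - check the authority before filing")
```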
Should you require any further information on the issues covered in this article, please contact one of our Disputes and Investigations team.