It's been a busy year in AI and law, with court decisions in the UK and the US illustrating how the courts are adapting to the novel legal challenges posed by AI. In this article, we have pulled together some of the most notable AI legal developments from the year.
Fake citations
The potential for misuse of AI has recently come under scrutiny in the courts, with solicitors and barristers found to have cited fake, AI-generated cases in pleadings, witness statements and applications. This has resulted in public criticism, wasted costs orders and referrals to the professional regulators for those involved. There are two recent English cases in which the courts have given guidance.
- In Ayinde, R (On the Application Of) v The London Borough of Haringey [2025] EWHC 1383 (Admin) (heard together with Al-Haroun v Qatar National Bank QPSC & Anor), the legal team cited five fake cases and the barrister was unable to give a proper account, when challenged, of how this had happened. The President of the King's Bench Division gave important guidance on the use of AI in court proceedings in light of lawyers' professional and ethical responsibilities and duties to the court, and invited the regulators to consider urgently what further steps they should take.
- In the case of D (a child) (recusal), the mother, representing herself as a litigant in person, submitted a "lengthy" skeleton argument containing several erroneous or non-existent citations, which she admitted had been generated with the assistance of artificial intelligence. The Court of Appeal accepted that the mother did not intend to mislead the court but, while sympathetic, emphasised that all parties are responsible for ensuring that the cases they cite are genuine and provide valid legal authority.
These cases serve as a reminder of the importance of using AI appropriately and always checking its outputs – the courts are clear that the responsibility rests on the individual to ensure that the information put before the court is true and accurate.
Updated judicial guidance on the use of AI
The initial judicial guidance, issued in December 2023, has been updated twice this year. The October 2025 guidance adds to the glossary of common terms and expands on the risks of bias in training data and of AI hallucinations, which generate incorrect or misleading information. It provides further advice on confidentiality, reminding judicial office holders not to enter private information into public AI tools, and signposts where to report any inadvertent disclosures as data incidents. Lord Justice Birss, Lead Judge for Artificial Intelligence, stated: "The use of AI by the judiciary must be consistent with its overarching obligation to protect the integrity of the administration of justice and uphold the rule of law".
Bar Council Guidance on AI
On 27 November 2025, the Bar Council published updated guidance on the use of ChatGPT and generative AI. It concludes that there is nothing inherently improper about using reliable AI tools to assist in the provision of legal services, but those tools must be properly understood by the individual practitioner and used responsibly. The guidance sets out the key risks of LLMs: anthropomorphism; hallucinations; information disorder; bias in training data; and mistakes and the use of confidential data in training.
Getty Images v Stability AI
The English High Court issued a mixed ruling that was widely seen as an initial win for the AI industry. The primary copyright infringement claims were dismissed because the training of the AI model occurred outside the UK, removing the territorial basis for the claim. On secondary copyright infringement, the judge held that whilst an "article" may constitute an intangible object, an AI model such as Stable Diffusion, which does not and never has stored the Copyright Works, cannot be an "infringing copy". Getty's claim therefore failed. Getty dropped some of its claims, so there are issues that were not tested before the court. We anticipate that there will be more cases in this area.
Unitel Direct Ltd v Racing Edge Auto Repairs Ltd & Ors [2025] EWCC 3
Also in the English courts, multiple claims were brought by Unitel Direct Limited against various defendants for alleged unpaid fees under purported verbal business-to-business contracts for online advertising. The central legal issue was contract formation, and a key part of Unitel's evidence was transcripts of telephone conversations purportedly generated using a third-party AI-driven transcription service. However, the judge questioned the reliability of those transcripts due to various inconsistencies and therefore gave that evidence limited weight. The judge signalled that courts will demand robust foundations – full audio recordings, proven accuracy, consistent timestamps – before giving substantial weight to AI-generated documents. This highlights the growing need for standards for verifying evidence produced by AI systems in litigation.
Boundaries in the courtroom: AI Avatars not welcome
Courts are establishing clear limits on how AI can be used in legal proceedings. A recent case in the US provides an important illustration. A litigant appearing before a New York appellate court attempted to use an AI-generated avatar to present his arguments, but the judges firmly rejected the approach. The court's decision underscores that while AI may be transforming many aspects of legal practice, there remain fundamental boundaries in court proceedings where human presence and accountability are required. The judges emphasised concerns about authentication, responsibility for the arguments presented, and the fundamental nature of court proceedings as human interactions. These concerns are reflected in the steps being taken by courts and professional standards bodies to clarify when and how AI can be appropriately used in legal practice.
The Algorithmic Liability Frontier: UnitedHealth case
AI-driven decision systems are creating new liability exposures, particularly when they override human judgement. Another case in the US – a class action lawsuit against UnitedHealth Group and other insurers (Estate of Gene B Lokken et al v UnitedHealth Group, Inc et al) – exemplifies the escalating legal risks surrounding algorithmic decision-making in healthcare. Multiple class actions allege that insurers deployed AI to override physician determinations and deny coverage, despite documented high error rates in algorithmic assessments of patient needs. Plaintiffs – representing patients and estates of patients whose coverage was terminated – contend that the insurance provider's AI-driven denial of claims constitutes breach of contract, violation of good faith and fair dealing, unjust enrichment, and insurance bad faith. Defendants deny these allegations. This case highlights the "black box" problem of AI, where even developers may be unable to explain why an algorithm made its recommendations – creating novel challenges for both litigation and compliance strategies.
We anticipate seeing more commercial disputes generally, and specifically disputes relating to tech-enabled fraud, where AI is at the centre of the dispute. We also expect more copyright and trade mark issues to be tested in the courts following the Getty case. As is evident from some of the cases referred to above, such disputes will acquire new complexity as they test the usual elements of causes of action, and both parties and judges will need to grapple with applying established legal doctrines to new situations.