13 August 2024
AI in focus: disputes and investigations – 1 of 6 Insights
One of the reasons clients opt for mediation and other forms of alternative dispute resolution (ADR) is the greater flexibility they can offer in contrast to the more rigid and traditional court system and Civil Procedure Rules.
That may be doing the courts something of a disservice, as last year we saw the first use of AI by a judge in the English court system, with Lord Justice Birss using ChatGPT to assist him in his judgment in a Court of Appeal case.
For clarity, LJ Birss did not entrust ChatGPT with any decision-making power but instead asked it to summarise an area of the law that he knew well. This created the element of quality control which remains essential for any use of generative AI (GenAI) language models, at least for now.
More widely, a key question when considering the likely impact of AI on the legal system is whether there are any specific areas in which it would be most effective. It has been suggested that one such area is mediation.
In addition to greater flexibility, it has been noted that the focus of mediation is assisting the parties to come to a resolution themselves, as opposed to court litigation or other forms of ADR in which a judge/arbitrator hears the case and comes to a judgment.
It is widely accepted that none of the current GenAI models consistently produces accurate work product without human input. Hallucinations, where a model generates plausible-sounding facts or patterns that have no basis in its underlying data, remain a problem. There are multiple examples from the last few years of lawyers being reprimanded for filing legal briefs and submissions containing authorities which turned out to be entirely fictitious, having been hallucinated by ChatGPT.
Mediation, in contrast, arguably plays to the current strengths of GenAI, such as processing and summarising significant amounts of data. If the parties' materials were submitted to the model, this could help them understand each other's positions, narrow the issues in dispute and ultimately come to a resolution themselves.
AI-assisted mediation (with a human mediator and the parties being supported by the technology) therefore sounds very plausible. However, what about AI-led mediation with no human mediator at all?
The challenge would of course come in situations in which a mediator would typically intervene, for example to arrive at a settlement figure or seek to unblock an impasse between the parties. In this regard, a Harvard Law School blog reports that GenAI was recently used in the context of a mediation in which the claimant was seeking US$550,000. The defendant stated that it was unwilling to pay more than US$120,000.
Tasked with generating a proposal for a settlement offer, the GenAI model proposed US$275,000 (perhaps unsurprisingly 50% of the amount claimed). In this case, the amount was agreed and the mediation was successful.
Whilst the settlement offer put forward by GenAI was far from imaginative, it is clear that as such models continue to develop, they could become very useful tools in this regard.
There are other potential advantages and disadvantages of AI mediation:
It is a reasonable assumption that GenAI could reduce the costs involved in mediation (including the cost of a human mediator). It also seems likely that the parties would choose to hold the mediation remotely in these circumstances (saving further costs when compared with in-person mediation). We would note that GenAI is still in its infancy and that while models like ChatGPT 4 are currently free to access in a limited format, it is not clear what pricing models will be introduced in the future. Rather like the Internet, it feels likely that pricing models will continue to develop along with the technology.
It has been suggested that AI-led mediation could reduce the scope for human bias towards one side (including unconscious bias).
There is no suggestion that this is a particular problem within the mediation sector. Perhaps the key point, however, is not the objective reality but what clients perceive to be the case – ie that they feel a human mediator is not properly taking account of their position for whatever reason.
A significant amount of research is being done into whether GenAI (most notably ChatGPT) can in fact be considered objective and neutral.
To this effect, research has found racial, cultural and political bias within ChatGPT. A study by the University of East Anglia found left wing bias to be 'systemic' within the model.
The point, of course, is that GenAI models generate results based on the data available to them. They are therefore only as objective (and factually correct) as the information provided. ChatGPT was fed an enormous data set in order to pre-train its algorithm (GPT stands for Generative Pre-trained Transformer). The approach taken to AI mediation, by contrast, may be to provide the tool with enough information to train it properly (including to understand the mediation process and potentially some key legal principles) while using a more closed model than something like ChatGPT, since inputting too much information could increase the scope for bias or decision-making based on irrelevant factors.
Whilst a reduced risk of human bias may be an advantage of AI-led mediation, some have argued that a bigger risk would be a lack of human empathy.
Mediation, arguably more than other forms of ADR, involves the mediator seeking to give both sides a platform to present their case and collaborate to reach a mutually acceptable position. In this regard, in cases where parties are entrenched in their position and emotions are potentially running high, it is unclear whether a GenAI model could ever have sufficient emotional intelligence to navigate this.
GenAI cannot detect body language or tone, or adjust its own tone to reflect the parties' behaviour. How would the model respond if a lawyer or lay client became rude or aggressive?
As part of our participation in the London International Disputes Week (LIDW) 2024 series of events, and specifically a panel discussion on AI and litigation, we asked Taylor Wessing's own GenAI tool, LitiumTW, to produce a suite of documents based on a fictional case study in which the parties had agreed to engage in a mediation process. The output included a mediation statement (to be exchanged before the mediation meeting), an opening statement for the mediation meeting, and an initial strategy document which identified some of the strengths and weaknesses of the case and helped to inform the mediation-related documents. The material produced served as a helpful first draft, which a lawyer could then develop.

A key learning was how important proper prompt curation is when aiming to generate complex textual output. To get the most out of LitiumTW and other GenAI tools, prompts have to be precise, detailed and logically structured, and on occasion reworked if the generated text is insufficient. As the models grow in sophistication, so too will our understanding of how to get the most out of them.
It is clear that GenAI has significant potential for use in legal contexts such as mediation. It could bring advantages such as cost savings and speed which are central to why lawyers and their clients push for ADR in the first place.
However, the limits of current GenAI models are clear and there are important issues such as hallucinations and potential bias to work through before clients are likely to put their confidence in the technology.
Should you require any further information on the issues covered in this article, please contact one of our Disputes and Investigations team.