Artificial intelligence (AI) is playing an increasingly important role in many areas of life, including the legal field. We asked two of our litigators, in the UK and Germany, for their views on the matter.
Would you say there is a difference in perception between the UK and Germany when it comes to AI in litigation?
- Laurence Lieberman: I am not familiar enough with the state of development of AI for litigation in Germany to make an accurate comparison. Speaking for the UK, there is currently strong momentum among the judiciary and the court system to digitise the world of disputes. Law firms are absorbing that momentum, which means clients are now increasingly demanding that their lawyers adopt these tools. New entrants regularly join the AI and LitTech market, we are working with some of the really exciting ones, and there is a real opportunity for lawyers and product providers to help shape the future.
- Volker Herrmann: For me it is quite clear that in the UK, in Germany and probably everywhere else in the world, lawyers have realised that AI already has an impact on their everyday work, and that this impact will be even greater in the future. Even if some colleagues are ambivalent about this development, I personally think that AI is, and will remain, of great value in providing even better legal services to our clients. Especially in litigation, AI can improve risk assessment and case evaluation and thereby help us lawyers.
When will judges be replaced by AI?
- Volker Herrmann: I do not think we will see judges replaced by AI in the near future, if ever. Even if AI tools will most likely be implemented in "tomorrow’s courtroom" to support judges in reaching their decisions – which in itself will take a considerable amount of time, at least in Germany – the acceptance of judicial decisions by those seeking judicial assistance is an important factor and imperative to the whole judicial system and its authority. Judges often have to decide cases where their task goes beyond adding up the facts and applying the relevant law. Striking a balance between the parties’ interests often makes it necessary to search for the "truth", or rather for a result that lies somewhere between the parties’ positions in a particular case. I am not sure if and how AI could complete this task, let alone present such results convincingly to those taking legal action. Further, a regular court proceeding often involves hearing witnesses, experts and so on. This makes human interaction necessary, and it is hardly imaginable for the foreseeable future that this kind of interaction could be provided by AI. Nevertheless, there is still enough room for AI in the "courtroom" (e.g. the issuance of dunning notices).
- Laurence Lieberman: I think that in simpler or lower-value cases, such as traffic accidents or personal injury claims, we may see AI decision-makers within the next 5-10 years. In complex business disputes, whilst I think human judges may draw upon AI tools to help them sift through substantial data and identify critical factual disputes, I do not consider that human judges will be replaced completely by AI ones. Many cases are decided on the exercise of fine judgment, the credibility and honesty of witnesses, discretion, even sympathy, and the human ability to assess these will not be taken over by AI for many years to come.
What are the perils of AI in litigation?
- Laurence Lieberman: I think there is a danger around statistical accuracy. Any AI tool is only as good as the size and relevance of the dataset it analyses. For example, the huge volume of case law in the US might make AI tools more useful there than in other countries. Moreover, because so few cases make it to trial and a reported judgment, and so many go to confidential arbitration, AI case-assessment products inevitably reach opinions based on perhaps 10% of all disputes.
- Volker Herrmann: From my perspective, the challenge is twofold. First, lawyers need to understand the purpose and basic functioning of a particular AI-based tool before using it; we cannot expect a tool to deliver good results if we do not know how to use it. Secondly, there is as yet no regulatory framework that answers essential questions, e.g. to what extent a lawyer may rely on results produced by AI tools. Also, as far as the technical side is concerned, we should bear in mind that technical systems have always been a common target for manipulation and exploits. Of course this is not a particularity of AI, but of all technical systems.