The future role of artificial intelligence (AI) does not always get the best press. Titles such as "Superintelligence: Paths, Dangers, Strategies", "The Black Box Society" and "Weapons of Math Destruction" can leave one feeling that the future of AI is a path filled with peril – our day-to-day tasks destined to be taken over by a smarter, more efficient and secretive robot.
However, the ever-increasing role of AI is fast becoming a certainty. Just this month the government announced, in a press release creatively entitled "[p]rojects lay the groundwork for a future of robolawyers and flying cars", that the Solicitors Regulation Authority (SRA) will award nearly £700,000 to fund AI projects intended to transform the legal services market for small businesses and consumers. This is but one of many attempts to use AI in the legal sphere to assist lawyers. Going one step further, this article explores the role not just of the robolawyer but of the robojudge.
While robolawyer tools exist to assist, the role of the robojudge is quite different: such tools either assist in determining a dispute, actually determine an element of it, or determine the dispute in its entirety.
Algorithms are structured decision-making processes: a set of rules is decided upon, and results are delivered according to those rules. These predetermined parameters define the limits and elements of the decision. Further, if machine learning is utilised, the AI system may learn from both previous decisions and new data, refining its processes to improve future decision making. Put like that, one can see the appeal of AI in the judicial sphere. It reflects in many ways how we believe decisions are currently made, and indeed should be made – learning from previous decisions and taking new information into account.
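The mechanics described above can be sketched in a few lines of code. The toy example below is purely illustrative – the rule, the evidence scores and the threshold are all invented for the purpose – but it shows a predetermined decision rule alongside a naive "learning" step that refines one parameter of that rule from past outcomes:

```python
def decide(evidence_score, threshold=50):
    """Predetermined rule: uphold a claim only if the evidence score
    (0-100, a hypothetical measure) clears a threshold set in advance."""
    return "uphold" if evidence_score >= threshold else "dismiss"

def refine_threshold(past_cases, threshold=50, step=5):
    """Crude 'learning': nudge the threshold towards agreement with
    previously decided cases, given as (score, actual_outcome) pairs."""
    for score, outcome in past_cases:
        predicted = "uphold" if score >= threshold else "dismiss"
        if predicted != outcome:
            # Move the decision boundary towards the misclassified case.
            threshold += step if outcome == "dismiss" else -step
    return threshold

# Invented history of decided cases.
past = [(40, "uphold"), (45, "uphold"), (80, "uphold"), (30, "dismiss")]
learned = refine_threshold(past)  # the threshold drifts to fit history
print(decide(42, threshold=learned))  # prints "dismiss"
```

The point of the sketch is the structure, not the realism: the "limits and elements of the decision" live entirely in the predetermined rule, while the learning step merely adjusts a parameter in light of past decisions – which is also why such a system inherits whatever patterns that history contains.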
However, the stakes are high in the field of judicial decision making, and so a degree of caution is necessary. As such, the topic of robojudges elicits a number of technical and legal questions on the role and limits of AI in the judicial sphere.
AI tools have proven very successful at predicting the outcome of decisions – take, for example, the well-known predictive tool which analysed European Court of Human Rights decisions. Predictive tools are of course useful in, for example, evaluating litigation risk, and potentially in indicating whether a dispute would be better settled. The crucial question, however, is what element of a dispute, if any, we would trust a robojudge to determine.
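For illustration only, the idea behind such predictive tools – and emphatically not the UCL system itself, which used far more sophisticated text analysis – can be reduced to a crude nearest-neighbour sketch: predict for a new case the outcome of the past case whose wording it most resembles. All case descriptions below are invented:

```python
from collections import Counter

def tokens(text):
    """Bag of words: lowercase the text and count each word."""
    return Counter(text.lower().split())

def predict_outcome(new_case, past_cases):
    """Return the outcome of the past case whose description shares
    the most words with the new case (a crude nearest neighbour)."""
    def overlap(a, b):
        return sum((a & b).values())  # count of shared words
    new = tokens(new_case)
    best = max(past_cases, key=lambda c: overlap(new, tokens(c["facts"])))
    return best["outcome"]

# Invented examples of decided cases.
past = [
    {"facts": "detention without judicial review for months",
     "outcome": "violation"},
    {"facts": "routine tax assessment appeal dismissed on procedure",
     "outcome": "no violation"},
]
print(predict_outcome("prolonged detention with no judicial review", past))
# prints "violation"
```

Even this toy shows why prediction is the easy half of the problem: the tool only echoes the pattern of past outcomes, which is useful for assessing litigation risk but says nothing about whether the new case *ought* to be decided the same way.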
One common criticism of AI, and of its possible application to the role of a judge, is that it is based on a set of data which is backwards-looking. The risk is that a robojudge would by design simply repeat the decisions, and the mistakes, of the past, and fail to develop the law in a critical way. In theory an AI judge could amplify the biases already prevalent in our system.
However, if one takes the example of AlphaGo, it can be seen that AI does not necessarily simply repeat old moves, decisions and conclusions but rather has the potential to identify creative solutions to old problems. Therefore, the presumption that robojudges would merely repeat the findings of old data is not a necessary conclusion.
However, the algorithm that forms the basis of AI judicial decision making is a real concern. One of the key objectives and tenets of the UK legal system is transparency, yet many AI systems cannot readily explain how they reach a given result – what many call the ‘black box’ of AI. A concern therefore arises as to whether individuals should be entitled to know the basis upon which a decision about them is being made, or whether inspection of that basis may be refused.
Therefore, the question is not so much whether AI can predict the outcome of cases – that question, it seems, has been answered – but rather whether an AI judge should be able to decide that outcome. In essence, it is a question of whether a legal system should value the human responsibility of decision making.
One difficult question, which arises not just where AI meets the legal sphere but in many areas of AI, is what standard we expect of it. Do we expect AI to eliminate the risks, such as bias, found in our current system, or simply to reduce their likelihood? In an ideal world AI would improve on the status quo by eliminating current risks and concerns. However, as courts experience increased demand and consequently backlogs – take India, where the backlog in the district courts runs to some 22 million cases – perhaps holding AI to the standard of actively improving decision making is too much.
The answer to the future role of robojudges need not be an absolute. A robojudge could, for example, assist in aspects of decision making, such as where a judge wishes to evaluate the trend in a particular line of case law. This model is more akin to a judicial clerk or assistant than to a judge. It may be that the more popular route is first to see how AI can assist judges, and to ask not whether robojudges should determine disputes, but which aspects of a dispute are better resolved by the traditional method and which by a robojudge.
Department for Business, Energy & Industrial Strategy, ‘Projects lay the groundwork for a future of robolawyers and flying cars’ (5 October 2018).
UCL News, ‘AI predicts outcomes of human rights trials’ (24 October 2016).
The Guardian, ‘India's long wait for justice: 27m court cases trapped in legal logjam’ (5 May 2016).