2024年7月16日
AI in focus: disputes and investigations – 5 / 6 | Insights
Users of generative AI tools such as chatbots should be aware of the risk that records of their interactions with these tools are likely to be disclosable in litigation or regulatory proceedings.
A user will generally access a chatbot via a website. There will therefore be a record of the interaction between the user and the chatbot within the user's browser history, and potentially also on their device. The user may also have shared a copy of that interaction electronically with others or printed it in hard copy. There could therefore be multiple records of the interaction in different forms, each of which may be disclosable in litigation or a regulatory investigation unless the record is privileged. We consider below the requirements for the different claims to privilege.
An AI chatbot is a computer program: it is not a human. It is therefore unlikely to satisfy the requirement of being a 'lawyer'.
If the chatbot has been trained on data that is publicly available on the internet, then even if that data included material written by a lawyer, it is unlikely that the user would have entered into a retainer with that lawyer. The necessary client–lawyer relationship required to benefit from privilege protection would therefore not exist.
It will be interesting to see how the law progresses with regard to AI tools developed by law firms. If a client has engaged a law firm to provide legal advice, and that law firm has developed its own AI tool which it has trained on materials prepared by lawyers within that law firm, then it is at least arguable that the response provided by the AI tool to a prompt or question from the user (the client) constitutes data from a lawyer.
If the AI tool is an open model, which means that it has been trained on data that is publicly available on the internet, and the interactions between users and these models are not kept confidential, then the communication will not be confidential.
If the AI tool is a closed system which operates within a secure network, then the interactions between the user and the LLM may be stored securely within that network and therefore attract confidentiality. For example, Taylor Wessing has recently launched LitiumTW, which is a generative AI tool that operates on a closed system.
The dominant purpose of the user's prompt or question to the chatbot must be to obtain legal advice. It is therefore not sufficient if the user is seeking to obtain information from the chatbot that it intends to pass on to its legal adviser, who will then provide the user with legal advice.
A user will need to consider whether its interactions with an AI chatbot meet the following requirements:
As set out above, where the chatbot is based on an open model, it is difficult to argue that the chatbot is the user's lawyer.
A user may have a better chance of arguing that the chatbot is their lawyer where they have engaged a firm that has developed its own LLM.
A potential benefit of litigation privilege is that it can cover communications between the client and a third party. Whether privilege would cover the interaction will come down to whether the LLM is a third party. Given that the LLM is a computer program and not a human, whether privilege will apply in these circumstances is likely to depend on whether, as a matter of public policy, the courts and/or Parliament are willing to treat a computer program as a 'party'.
The same considerations will apply as set out above for legal advice privilege. Is the dominant purpose of the interaction its use in the litigation or regulatory investigation?
The dominant purpose of the interaction must be its use in support of the user's position in litigation which has already been commenced or which is more than likely to arise. The user must be operating the chatbot in order to prepare documents that will be used in those proceedings, such as statements of case or evidence.
The courts have been clear that only Parliament can extend the scope of legal advice privilege to include advice given by non-lawyers. It is therefore unlikely that the courts would be willing to find that a computer program falls within the definition of 'lawyer' or that the output from a computer program constitutes 'legal advice' without legislation from Parliament. It is important to remember that the purpose of privilege protection is to enable clients to take legal advice on their affairs without the fear that those communications will be disclosable. It is therefore unlikely that Parliament will extend the scope of privilege to communications between users and computer programs generally as this would be too broad. It is however possible that privilege will be extended to cover the output from technologies that have been developed by lawyers and which work solely on data created by lawyers.
Users should be aware that there is a risk that records of their interactions with AI tools will not be privileged. Users should therefore:
Avoid storing conversations with chatbots on their devices or sharing them with others if they are concerned about disclosing the conversation to third parties.
Should you require any further information on the issues covered in this article, please contact one of our Disputes and Investigations team.
Authors
Stuart Broom and Tom Charnley