What has happened?
On 9 March 2026, the CMA published guidance to businesses on complying with consumer law when using AI agents. The guidance is useful for businesses using AI agents in a variety of settings, including handling customer queries, processing refunds, recommending products and managing marketing campaigns.
The guidance reminds businesses that they are responsible for what their AI agents do, even if the agent was designed or supplied by a third party. Particular issues to consider include the following.
Disclosure
The CMA guidance states that, "If you use an AI agent, consider whether you need to label it so you do not mislead customers into thinking that a service is being provided by a real person – if the fact they are dealing with AI rather than a person might affect people’s decisions then you should tell them. Do not overstate the role of AI involved in providing a service, or what it can or cannot do."
While disclosure and labelling are not mandatory, businesses should carefully consider whether a failure to disclose or label could constitute a misleading practice or omission.
Training and testing
Businesses should consider what data their AI agents need to ensure that they comply with consumer law. They should think about how the agents will be prompted to:
- respect customers’ statutory rights and the terms of contracts (for example, to make sure cancellation rights aren’t being breached)
- avoid misleading customers (both through what they do and do not say)
- properly obtain any necessary consents required by consumer law.
Testing is crucial to ensuring compliance; the guidance specifically mentions A/B testing and unit testing.
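By way of illustration only, the kind of unit testing the guidance mentions could take a form like the following minimal Python sketch. Everything here is hypothetical: `agent_reply` is a stand-in for calls to a business's actual AI agent (its reply is hard-coded so the example is self-contained), and the 14-day window and forbidden phrases are placeholders for whatever statutory rights and misleading statements are relevant to the business.

```python
import unittest

# Hypothetical stand-in for the business's AI agent: in practice this
# would call the model or vendor API. The reply is hard-coded so the
# example runs on its own.
def agent_reply(prompt: str) -> str:
    return ("You can cancel within 14 days of receiving your goods for a "
            "full refund, in line with your statutory rights.")

class CancellationRightsTest(unittest.TestCase):
    """Check the agent does not mislead customers about cancellation."""

    # Illustrative phrases that would wrongly deny the statutory right
    FORBIDDEN = ["no refunds", "cannot cancel", "non-refundable"]

    def test_states_cancellation_window(self):
        # The reply should state the statutory cancellation window
        reply = agent_reply("Can I cancel my order?").lower()
        self.assertIn("14 days", reply)

    def test_avoids_misleading_denials(self):
        # The reply should not contain any misleading denial of the right
        reply = agent_reply("Can I cancel my order?").lower()
        for phrase in self.FORBIDDEN:
            self.assertNotIn(phrase, reply)
```

Tests like these could be run routinely (for example with `python -m unittest`) as part of the testing regime the guidance describes, alongside A/B testing of live behaviour.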
Monitoring
The guidance emphasises the need for an experienced human to continually monitor and test the performance of the AI agent to make sure that it is delivering the right results, behaving as intended and complying with consumer law. There should also be monitoring of complaints about the AI agent.
Having appropriate processes in place to ensure continuous monitoring and improvement will be key.
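As a minimal sketch of what monitoring of complaints might involve, the hypothetical Python snippet below flags customer transcripts containing complaint-like phrases for human review. The transcripts and keyword list are invented for illustration; a real process would draw on the business's own customer-service data and complaint categories.

```python
# Illustrative complaint keywords; a real list would reflect the
# business's own products, terms and complaint history.
COMPLAINT_KEYWORDS = ["refund refused", "misleading", "could not cancel"]

def flag_for_review(transcripts: list[str]) -> list[str]:
    """Return transcripts containing complaint keywords, for human review."""
    return [t for t in transcripts
            if any(k in t.lower() for k in COMPLAINT_KEYWORDS)]

# Hypothetical transcripts of AI-agent conversations
transcripts = [
    "Agent said my order was non-refundable and my refund refused.",
    "Agent confirmed my 14-day cancellation right without issue.",
]
flagged = flag_for_review(transcripts)
```

Flagged transcripts would then be escalated to the experienced human reviewer the guidance envisages, so that problems can be identified and acted on quickly.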
Act quickly if there is a problem
The guidance underscores the need for businesses to act quickly if there is a problem, for example, by refining prompts or workflows. It is especially important to act quickly if an AI agent interacts with large numbers of people (or its outputs could reach a lot of people) or vulnerable customers.
What does this mean for you?
While consumer law is no different when an AI agent is used, the risks of a problem are arguably higher. Businesses should ensure that:
- They carefully review the CMA's guidance, including the useful examples at the end.
- They remember that expectations may be higher where a business is dealing with vulnerable consumers or large numbers of consumers, and may also be higher in certain sectors.
- Appropriate and effective processes and policies are put in place to ensure that disclosure, testing, and monitoring are performed as necessary, and that it is clear what staff should do if a problem is identified.
- A human with appropriate experience and knowledge of consumer law undertakes training, testing and monitoring.
- Staff are adequately trained on these issues.
Failure to do the above not only increases the risk of a breach of consumer law but could also increase the severity of any sanctions (including fines) imposed if a breach occurs.