On 26 October 2023, the Prudential Regulation Authority and Financial Conduct Authority published a feedback statement (Statement) on the use of artificial intelligence (AI) and machine learning (ML) in financial services. The publication coincided with Prime Minister Rishi Sunak's speech on AI, delivered ahead of the first ever global AI safety summit, taking place the following week at Bletchley Park.
The Statement summarises the responses the regulators received to their October 2022 discussion paper on AI and ML (Discussion Paper). The Discussion Paper was part of a wider programme of work considering the regulation of AI in UK financial services, including the AI Public Private Forum (AIPPF) and its final report (published in February 2022) and was set in the broader context of emerging AI regulation, principles and policies, including the publication in July 2022 of the government's policy paper, 'Establishing a pro-innovation approach to regulating AI'.
The 54 respondents to the Discussion Paper can be broken down into type of institution as follows:
- Industry body (12)
- Bank/building society (11)
- Technology provider (8)
- Consumer association (6)
- Insurance (5)
- Financial market infrastructure (3)
- Consultancy (3)
- Other (6)
There was no significant divergence of opinion between sectors.
Summary of responses
The main points raised in the responses were:
- A regulatory definition of AI would not be useful. There was support for a principles-based or risk-based approach to the definition of AI, which would support international interoperability better than a more prescriptive approach.
- Given the rapidly changing nature of AI capabilities, the regulators could maintain 'live' regulatory guidance and examples of best practice.
- Ongoing engagement with industry is important. Initiatives like the AIPPF were useful and may provide a model for future engagement.
- The regulatory landscape relating to AI is seen as being complicated and fragmented. More coordination and alignment between domestic and international regulators would therefore be welcomed.
- Data regulation is considered by most respondents to be fragmented. More regulatory alignment would help to address data risks, particularly those relating to fairness, bias and the management of protected characteristics.
- One of the key areas that regulation and supervision should focus on is consumer outcomes, particularly how to ensure fairness and respond to other ethical dimensions.
- An increased use of third-party models and data is concerning and more regulatory guidance on this would be of assistance. The regulators' discussion paper, 'Operational resilience: Critical third parties to the UK financial sector', was identified as being relevant to this topic.
- AI systems can be complex and span many areas of a firm. This means a joined-up approach across business units and functions is needed in order to reduce AI risks. In particular, closer collaboration between data management and model risk management teams would be helpful.
- The existing principles for model risk management for banks, set out in Supervisory Statement SS1/23 (published in May 2023), are considered adequate to cover AI model risk. However, some areas would benefit from clarification to address risks specific to models with AI characteristics.
- Respondents were of the view that existing governance structures, including regimes such as the Senior Managers and Certification Regime (SMCR), were capable of addressing AI risks. However, the majority of respondents thought it would be helpful to have further guidance on the interpretation of the 'reasonable steps' element of the SMCR in an AI context.
The regulators will make use of the responses to the Discussion Paper as they continue to consider issues relating to AI and ML in UK financial services.
Help is at hand
Our team has significant experience in FinTech and the digital transformation of financial services and can advise you on the regulatory considerations around the use of AI and ML in the financial sector.