AI-driven technologies open up entirely new possibilities in the payment market.
They enable companies to offer innovative products and services that were previously unimaginable. Big data can be used to offer customers better and more suitable products, and machine learning can be used to develop personalised financial services tailored to each customer's behaviour and needs. By enhancing cost efficiency and improving customer experiences, AI offers numerous opportunities for innovation and faster processes in the payment market, but any advances must be balanced against EU regulatory frameworks such as the AI Act, DORA, and existing risk management and supervisory requirements.
Use cases
Two prominent use cases in the payment market are AI-based chatbots and AI tools used for fraud prevention. Chatbots can handle a variety of tasks, from answering simple customer inquiries to conducting complex transactions, improving customer service while relieving staff workload. Advanced algorithms allow AI tools to identify suspicious activities in real time and take appropriate measures promptly, significantly reducing financial losses due to fraud.
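By way of illustration only, and not as a description of any particular product, the following sketch shows what real-time transaction screening with an anomaly-detection model might look like in practice. The feature names, training data and review threshold are hypothetical assumptions.

```python
# Minimal sketch: real-time transaction scoring with an anomaly detector.
# Feature names, data and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical features: [amount_eur, hour_of_day, merchant_risk_score]
rng = np.random.default_rng(42)
historical = np.column_stack([
    rng.lognormal(mean=3.5, sigma=1.0, size=5000),   # typical payment amounts
    rng.integers(0, 24, size=5000),                   # time of day
    rng.uniform(0.0, 0.3, size=5000),                 # mostly low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(historical)

def hold_for_review(amount_eur: float, hour: int, merchant_risk: float) -> bool:
    """Return True if the incoming transaction looks anomalous and should be held."""
    features = np.array([[amount_eur, hour, merchant_risk]])
    return model.predict(features)[0] == -1  # -1 means the model flags an anomaly

# A large night-time payment at a high-risk merchant is flagged for review.
print(hold_for_review(amount_eur=9500.0, hour=3, merchant_risk=0.9))
```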
EU AI Act
Although using AI can ultimately save costs, introducing it does require personnel and financial resources, not least to help navigate and comply with the regulatory framework surrounding the use of AI in the payment market. The new player is the EU's AI Act, which aims to regulate AI technologies comprehensively within the EU. Under the AI Act, chatbots and some fraud prevention tools will be classified as special types of AI under Article 50. This means that strict transparency rules will apply to providers of AI-based chatbots; in particular, customers need to know they are talking to an AI. Providers and deployers of AI in the payment market will need to keep the AI Act in mind as it begins to bite.
DORA and risk management
The regulatory landscape for using AI in the payment market is much broader than just the AI Act. A payment institution planning to incorporate AI needs to address all requirements set out in the Digital Operational Resilience Act (DORA), which will be applicable from 17 January 2025. DORA aims to ensure that all entities within the financial sector are resilient against digital threats, particularly cyber attacks and technical failures that could disrupt operations. A key requirement of DORA is sound risk management for all IT systems, and that includes AI. Institutions must develop and implement a comprehensive risk management system specifically tailored to information and communication technologies (ICT). This system should aim to identify, assess and mitigate all potential ICT-related risks, which will include AI-related risks. Risk assessments need to be updated continuously as the IT infrastructure, including the AI tools, develops. These assessments should cover not only existing threats and vulnerabilities but also new and emerging risks. Regular reviews of security measures and adjustments to current threat scenarios are essential. In practice, this means a payment institution using AI needs to understand the data used to train it and the output it is expected to produce, and it needs to monitor that output on an ongoing basis.
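To illustrate what continuous monitoring of an AI tool's output might involve, the following sketch compares the live score distribution of a model against a baseline captured at validation time. The metric (a population stability index), the data and the escalation threshold are illustrative assumptions, not requirements drawn from DORA itself.

```python
# Minimal sketch of ongoing output monitoring: compare live model scores
# against a validation-time baseline and escalate if the distribution drifts.
# Data and the 0.2 threshold are illustrative assumptions only.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between two score distributions; higher values indicate drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)   # avoid division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=10_000)   # scores recorded at validation
live_scores = rng.beta(2.6, 4, size=10_000)     # scores produced this week

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # common rule-of-thumb threshold, assumed here for illustration
    print(f"PSI {psi:.2f}: material drift - escalate to ICT risk management")
```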
Where the AI tool is provided to the payment institution by a third-party provider, outsourcing agreements need to be in place. These agreements should ensure that third parties also comply with DORA requirements, including by introducing robust risk management and monitoring processes. The institution must also supervise the outsourcing provider, and that oversight should be built into its risk management framework. Payment institutions will need to ensure all DORA requirements are observed by any third-country providers, not least those in the USA, a market leader in AI, and this brings its own challenges.
The supervisory authorities take the view that the established risk management requirements that go beyond DORA, and that underpin every business plan in the payment services market, apply equally and in a technology-neutral way to automated models, technology-driven innovation and AI. Entities using these technologies must therefore ensure they have robust risk management systems and processes in place.
Fairness principle
The German financial regulator BaFin has recently published supervisory guidance on ethical standards for AI. Using AI can notably accelerate processes and enable the rapid and effective analysis of vast amounts of data. However, problems may arise when machines make decisions and bias creeps in. Highly automated decision-making processes with minimal human oversight can amplify existing risks of discrimination. Consequently, payment institutions and regulatory authorities are required to prevent unjustified discrimination against customers. Direct discrimination occurs, for example, when older people are disadvantaged in the provision of financial services because of their age. Indirect discrimination could arise from a procedure that makes access to a financial service dependent on income level: since women earn less on average than men, they would be systematically disadvantaged. In both cases, such discriminatory practices can have significant legal and ethical implications for payment institutions.
Organisations must ensure that their policies and procedures comply with anti-discrimination laws and promote fairness and equality. This includes implementing measures to regularly review and adjust the criteria used in decision-making processes to prevent any form of bias or unequal treatment based on age, gender or other protected characteristics. Since the German regulator BaFin is a consumer protection authority as well as a financial supervisory authority, non-discriminatory use of AI will form part of its supervisory focus. This also needs to be kept in mind when applying AI, for example in chatbots. Data protection authorities will also be looking at any institution that fails to comply with the rules on solely automated decision-making which has a legal or similarly significant effect on individuals, as well as other data protection requirements, including that the processing of personal data must be fair and lawful.
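As an illustration of what a regular fairness review might look like in practice, the following sketch compares approval rates of an automated decision process across a protected characteristic. The group labels, decision log and the 0.8 threshold (a rule of thumb sometimes called the "four-fifths rule") are hypothetical assumptions and are not drawn from BaFin's guidance.

```python
# Minimal sketch of a periodic fairness review: compare approval rates across
# groups defined by a protected characteristic. All data here is hypothetical.
from collections import defaultdict

# Hypothetical decision log: (group, approved)
decisions = [
    ("under_60", True), ("under_60", True), ("under_60", False), ("under_60", True),
    ("60_plus", True), ("60_plus", False), ("60_plus", False), ("60_plus", True),
]

counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)
if ratio < 0.8:  # illustrative threshold triggering a closer look at the criteria
    print(f"Disparate impact ratio {ratio:.2f}: review decision criteria for bias")
```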
Looking at the bigger picture
In conclusion, the integration of AI into the payment market requires more than mere compliance with the AI Act. The implementation of AI technologies must be carefully embedded within existing processes and the comprehensive regulatory framework that already governs payment service providers. This includes adhering to the requirements set out in DORA, which mandates a robust risk management strategy for all IT systems, including AI, as well as data and consumer protection laws.
Regular risk assessments and continuous adjustments to security measures are crucial to adapting to new threat scenarios. Institutions must ensure that both internal and external AI service providers meet DORA's stringent requirements and are thoroughly monitored.
Additionally, supervisory authorities like BaFin have outlined clear expectations regarding ethical standards in AI use. It is imperative to ensure that automated decision-making processes do not lead to unjustified discrimination and that appropriate measures are taken to promote fairness and equality.
Overall, deploying AI in the payment market should be seen as an integral part of a holistic business and risk management strategy, aiming to foster technological innovation while meeting regulatory obligations.