In recent years, the rapid emergence of large language models, sparked by the mass adoption of generative-AI chatbots, has put AI back at the top of the C-suite agenda across the financial services sector. Given that automated systems (from algorithmic and high-frequency trading tools to robo-advisory systems) have long been deeply embedded in the financial services industry, financial institutions were quick to react to the latest AI boom, openly embracing the potential that this technology may bring to them and their customers.
As AI deal values continue to rise and firms compete on the size of their AI budgets and the level of AI adoption, the technology itself is evolving at a rapid pace. This rapid evolution has in recent months fuelled growing discussion about the next frontier in the AI space, one that promises to go a step beyond the generative-AI systems we already know: agentic AI.
Agentic AI in a nutshell
There is no uniform definition of what exactly agentic AI is.
In recent years, we have all become used to large language model-based AI chatbots that, based on user instructions in the form of prompts, deliver AI-generated output: once the user asks a question or sets a task, the chatbot delivers a response in the form of AI-generated text, images or video. As such, AI chatbots are heavily dependent on user input, and their ability to deliver AI-generated content is limited to the scope of the prompt in question.
Agentic AI systems (also commonly known as AI agents), on the other hand, whilst building on generative-AI techniques, are characterized by a greater degree of autonomy: once the user sets the scope of their task, they are able to operate without further instructions from the user and without human oversight of the process. To that end, an AI agent is able to initiate one or more actions with the aim of achieving the goal set by the user, including by calling external tools (e.g. by not only telling the user where a product might be found, but also by initiating the purchase process at the merchant's website).
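To make the distinction concrete, the following minimal Python sketch illustrates such a tool-calling loop. Everything here is a hypothetical simplification (the Agent class, the search_web and initiate_purchase tools, and a hard-coded two-step plan standing in for an LLM planner); it shows only how an agent, unlike a chatbot, keeps executing actions toward a goal rather than returning a single text response.

```python
# Minimal, illustrative agentic loop (all names hypothetical). Unlike a
# chatbot, which returns a single text response, the agent keeps choosing
# and executing actions ("tools") until the user's goal is achieved.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str                          # the one-off task set by the user
    history: list = field(default_factory=list)

    def decide_next_action(self):
        # Stand-in for an LLM planning call: here, a hard-coded two-step
        # plan of "search first, then purchase".
        if not self.history:
            return "search_web", {"query": self.goal}
        return "initiate_purchase", {"item": self.history[-1]}

    def run(self):
        # The agent acts autonomously: no further user prompts are needed.
        while True:
            tool, args = self.decide_next_action()
            if tool == "search_web":
                self.history.append(f"best match for '{args['query']}'")
            elif tool == "initiate_purchase":
                print(f"Initiating purchase of: {args['item']}")
                return


Agent(goal="noise-cancelling headphones under EUR 200").run()
```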
AI payments agents
Whilst the potential benefits of agentic AI can arguably be seen in virtually every corner of the financial services industry, the payment services industry stands out in particular.
From the consumer perspective, the use of AI agents for online payments may be particularly tempting due to the greater convenience and time savings it may bring. For instance, instead of browsing the internet themselves, consumers may task an AI agent with looking for specific products across different websites within a specified price range. Once the AI agent comes across products that match the criteria set in the prompt (e.g. category, color, size and price), it can automatically create the purchase order and initiate a payment transaction without requiring any further action from the user. Where the task involves several purchases, the AI agent can keep looking for products that match the user's criteria and keep purchasing them across various e-commerce websites.
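As a hedged illustration of the criteria-matching step just described, the sketch below filters hypothetical product listings against prompt criteria and flags the point at which a payment would be initiated; the product data, field names and the final "order placed" step are illustrative assumptions, not a real e-commerce or payments API.

```python
# Hedged sketch of the criteria-matching step: product data, field names
# and the final "order placed" print are illustrative assumptions.
products = [
    {"shop": "shop-a.example", "category": "sneakers", "color": "white",
     "size": 42, "price": 89.99},
    {"shop": "shop-b.example", "category": "sneakers", "color": "white",
     "size": 42, "price": 129.00},
]

criteria = {"category": "sneakers", "color": "white", "size": 42,
            "max_price": 100.00}


def matches(product: dict) -> bool:
    """True if the product satisfies every criterion set in the prompt."""
    return (product["category"] == criteria["category"]
            and product["color"] == criteria["color"]
            and product["size"] == criteria["size"]
            and product["price"] <= criteria["max_price"])


for product in filter(matches, products):
    # In a live agent, this is the regulated step: a payment transaction
    # is initiated without any further involvement of the user.
    print(f"Order placed at {product['shop']} for EUR {product['price']}")
```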
That being said, the use of AI agents in the way described above goes well beyond the AI assistants we have already seen across various e-commerce websites, because of one fundamental difference: whilst AI assistants merely help the consumer find the right product, AI agents tasked with completing the purchase order are effectively completing the purchase on behalf of the consumer, thereby initiating a payment transaction on their behalf. This brings their activity within the scope of the applicable payment services regulatory framework and therefore requires careful navigation of a complex regulatory environment in which the boundary between purely technological and regulated payment services can sometimes be rather thin.
Payment services regulation
When it comes to the use of AI agents for the completion of payment transactions, there are a number of regulatory considerations that both regulated and non-regulated entities need to stay mindful of. The Second Payment Services Directive (PSD2), the cornerstone of the payment services regulatory framework, does not explicitly mention the use of AI in the payment services sector, but it is technologically neutral and generally applies to the use of AI agents that make payment transactions on the customer's behalf.
License
Enabling others to initiate or execute payment transactions generally constitutes the provision of regulated payment services under PSD2, and anyone providing regulated payment services in the EU generally needs to be authorized as a payment service provider (PSP) by the national competent authority in the EU Member State of their establishment.
Entities looking to provide solely the technical infrastructure leveraged for the provision of payment services can oftentimes benefit from an exclusion under PSD2, insofar as they act solely as technical service providers that do not, at any point, come into possession of client funds. In the AI agent context, an AI company developing the infrastructure on which AI payment agents operate may fall under this exclusion. This generally requires, however, that someone else (usually a PSP supporting the payment process) takes care of the other part of the AI agent-generated payment transaction flow, in their capacity as the service provider that initiates and/or executes the payment transaction on the customer's behalf.
That being said, a non-regulated entity can hardly bring to the EU market a complete, back-to-back AI payment agent product that enables the initiation and execution of payment transactions on behalf of customers without either being authorized as a PSP or cooperating with a PSP.
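The division of roles described above might be pictured as follows; this is a minimal sketch under the assumption of a simple two-party setup, with hypothetical AIAgentPlatform (the excluded technical service provider) and LicensedPSP (the authorized provider executing the regulated leg) classes.

```python
# Minimal sketch of the role split under the PSD2 technical service provider
# exclusion; AIAgentPlatform and LicensedPSP are hypothetical names.
class LicensedPSP:
    """Authorized PSP: the only party initiating/executing the payment."""

    def execute_payment(self, payer: str, order: dict) -> str:
        return f"{payer} paid EUR {order['amount']} to {order['merchant']}"


class AIAgentPlatform:
    """Technical service provider: runs the agent, never holds client funds."""

    def __init__(self, psp: LicensedPSP):
        self.psp = psp  # the licensed partner handling the regulated leg

    def complete_purchase(self, payer: str, merchant: str, amount: float) -> str:
        order = {"merchant": merchant, "amount": amount}
        # The platform prepares the order technically, then hands the
        # payment transaction itself over to the PSP.
        return self.psp.execute_payment(payer, order)


platform = AIAgentPlatform(LicensedPSP())
print(platform.complete_purchase("Alice", "shop-a.example", 59.90))
```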
Consent & Authentication
Under PSD2, where a payment transaction occurs that was not properly authorized by the customer, PSPs are in many cases held liable and required to reimburse the losses to their customers.
For a payment transaction to be deemed authorized, the payer must have given their consent to its execution, which can generally be given (depending on the agreement between the PSP and the payer) prior to, or even after, the execution of the payment transaction. Consent can also be given for a series of payment transactions. PSD2 does not prescribe the exact form in which the customer's consent is to be provided and leaves this to the terms of the written agreement concluded between the customer and their PSP.
In addition, for certain payment transactions (in particular online payments), PSD2 requires PSPs to authenticate that the transaction is being made by the customer by applying so-called strong customer authentication (SCA). SCA, in a nutshell, is a multi-factor authentication method requiring PSPs to authenticate the payer based on two of three authentication elements: something the payer is, something the payer knows and something the payer possesses (e.g. facial recognition via a mobile device owned by the payer).
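The "two of three elements" logic can be expressed compactly; the sketch below uses PSD2's three element categories but is otherwise an illustrative simplification rather than an actual authentication implementation.

```python
# Illustrative "two of three elements" SCA check; the element categories
# follow PSD2, everything else is a simplifying assumption.
from enum import Enum


class Element(Enum):
    KNOWLEDGE = "something the payer knows"       # e.g. a PIN or password
    POSSESSION = "something the payer possesses"  # e.g. a registered phone
    INHERENCE = "something the payer is"          # e.g. a fingerprint or face


def sca_satisfied(verified: set) -> bool:
    """SCA requires elements from at least two distinct categories."""
    return len(verified) >= 2


# A face scan on the payer's own registered device covers two categories:
print(sca_satisfied({Element.INHERENCE, Element.POSSESSION}))  # True
print(sca_satisfied({Element.KNOWLEDGE}))                      # False
```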
In the context of an AI payment agent, it is generally possible to design the terms of the payment service agreement in a way that enables consent to be given for a series of payment transactions initiated by an AI agent. The customer's consent would in such a case need to be limited to the set of criteria specified in the AI prompt. Ensuring compliance with the SCA requirements, on the other hand, may be particularly challenging, given that all SCA authentication elements are strongly tied to a person (the payer).
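One hedged way to picture consent limited to prompt criteria is a scope check that the PSP runs on every agent-initiated transaction; the consent_scope fields and thresholds below are illustrative assumptions, not a prescribed contractual design.

```python
# Hedged sketch: a PSP-side check that an agent-initiated transaction stays
# within the consent scope derived from the customer's prompt. The
# consent_scope fields and thresholds are illustrative assumptions.
consent_scope = {"category": "sneakers", "max_price": 100.00, "max_orders": 3}


def within_consent(tx: dict, orders_so_far: int) -> bool:
    return (tx["category"] == consent_scope["category"]
            and tx["amount"] <= consent_scope["max_price"]
            and orders_so_far < consent_scope["max_orders"])


print(within_consent({"category": "sneakers", "amount": 89.99}, 0))  # True
print(within_consent({"category": "boots", "amount": 89.99}, 0))     # False: outside scope
```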
Disputes & Chargebacks
In cases where an AI payment agent, as a result of false product or price matching or the not infrequently occurring AI hallucination effect, purchases a product that does not meet all the criteria set by the customer, the payment transaction would generally be deemed unauthorized. This can be particularly problematic where the PSP cannot fully understand why the AI agent made a particular decision, leaving it without proper arguments to push back against the customer's claim (the well-known AI 'black box' problem).
In these situations, the PSP enabling the customer to use an AI payment agent would generally be held liable for any losses the customer incurred as a result of an erroneous transaction made by the AI payment agent.
Furthermore, as many PSPs issue payment instruments of global card networks, the terms of their agreements should generally provide for chargeback and dispute situations where an unauthorized transaction has been initiated by an AI agent. Careful redrafting of existing agreements with global card networks might therefore be required, which can be rather tricky where the networks have not expressed their readiness to support the use of AI agents in relation to their payment instruments.
Third-party risk management
Given that AI agent infrastructure comprises various components from different players (ranging from AI providers and merchant payment infrastructure providers to customer PSPs), a critical factor for PSPs entering the AI agent payments space is third-party risk management. Under the EU Digital Operational Resilience Act (DORA), PSPs are required to ensure that their service arrangements with providers of information and communication technology (ICT) services (such as providers of AI systems and products) are designed in compliance with the new requirements ensuring their digital operational resilience. This requires them, amongst other things, to design their contractual arrangements in compliance with minimum contractual requirements, to conduct proper due diligence and risk assessments of their vendors, and to ensure that external products and services fit into their broader ICT risk management framework.
Regulatory treatment under the EU AI Act
Whilst compliance with the payment services regulations and DORA is naturally a priority for regulated entities, entities looking to deploy AI agents in the EU must also contend with a piece of horizontal regulation that specifically addresses the use of AI – the new EU AI Act.
The EU AI Act has introduced a horizontal regulatory framework applicable to the use of AI across different sectors, which builds on top of the financial services regulation that applies to the use of AI payment agents. This directly applicable Regulation, set to apply in full as of 2 August 2026, has created a harmonized regulatory framework governing the development, deployment and use of AI systems, following a risk-based approach with different risk categories of AI systems to which different requirements apply.
The EU AI Act differentiates between entities that make AI systems available on the market (AI providers, such as tech companies) and entities using AI systems (deployers), including those using external AI systems under a third-party license agreement. Further, the AI Act differentiates between several groups of AI systems, introducing quite strict requirements for so-called high-risk AI systems (e.g. those used to assess creditworthiness or establish a credit score) and transparency requirements for AI systems that interact with people (such as chatbots).
When it comes to the deployment of AI payment agents, PSPs would generally be deemed deployers of AI systems, to which less onerous regulatory requirements under the EU AI Act apply. However, depending on the tasks that the AI payment agent is set to complete and, more importantly, on the types of data it processes, the AI payment agent may fall into different risk categories, with the consequence that the PSP in question would be required to comply with different regulatory requirements, ranging from simpler transparency requirements for medium-risk systems to the more onerous requirements applicable to high-risk AI systems (such as AI agents able to assess a user's creditworthiness).
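Purely as an illustration of this risk-based logic (and emphatically not as a restatement of the AI Act's text), a deployer's internal triage might resemble the following sketch, in which the capability labels are hypothetical.

```python
# Purely illustrative triage of the risk-based logic described above; the
# capability labels are hypothetical and the mapping is a simplification,
# not the AI Act's text.
def risk_category(agent_capabilities: set) -> str:
    if "creditworthiness_assessment" in agent_capabilities:
        return "high-risk: strict requirements apply"
    if "user_interaction" in agent_capabilities:
        return "transparency obligations apply"
    return "minimal risk"


print(risk_category({"user_interaction"}))
print(risk_category({"user_interaction", "creditworthiness_assessment"}))
```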
Outlook
With big technology companies, payment service providers and online marketplace operators increasingly focusing on the potential benefits behind agentic AI, there is very little doubt that the level of its deployment in the payments sector will only increase in the future.
From a regulatory standpoint, however, this is anything but an easy task, primarily for the entities seeking to play a key role in enabling customers to leverage agentic AI for payment purposes: the PSPs. They are expected to bear the highest degree of regulatory burden in this process and will need to carefully design their cooperation arrangements with other key stakeholders, starting with the technology companies providing AI systems and the online marketplace operators aiming to deploy the pipeline infrastructure that will facilitate data flows when an AI agent begins acting on a customer's behalf within their website ecosystem.
As industry interest in agentic AI rises, so too do regulatory scrutiny and supervisory sensitivity around the topic. Therefore, in addition to payment services regulation, the digital operational resilience framework, and the EU AI Act, entities aiming to experiment with AI agents in the payment space should be mindful of the ever more complex web of supervisory guidance at the EU Member State level that builds upon these requirements.