10 June 2024
In recent years, the financial services industry has undergone a digital evolution marked by the widespread adoption of new technological solutions that have shaped today's digital financial ecosystem.
At the end of last year, the rapid emergence of generative artificial intelligence (AI) based systems sent shockwaves through an industry that has already been relying on AI-based tools for quite some time. Aiming for better cost efficiency and potentially better investment outcomes, financial institutions have been increasingly experimenting with machine learning based computer algorithms for the better part of the last decade.
Algorithmic and high-frequency trading systems, together with a new type of automated system used for investment advisory and portfolio management, the so-called robo-advisors, have become standard in the financial services industry. The recent generative AI boom has sparked fresh discussions about the topic and placed AI at the top of the priority list of C-suites across the board, financial services entities included.
The deployment of AI in retail investment services, such as investment advisory and discretionary portfolio management, promises improved time and operational efficiency for financial institutions, as well as a more digitally friendly experience and potentially better investment outcomes for clients. However, this potential cannot be considered in isolation from the inherent risks that AI systems may pose to customers on the one hand and to the financial system on the other.
Conscious of these risks, which are particularly sensitive in the retail investment space, on 30 May 2024 the European Securities and Markets Authority (ESMA) issued its first targeted statement intended to serve as general guidance for investment firms looking to deploy AI-based systems in the provision of retail investment services. The statement provides EU investment firms in particular with guidance on how to ensure compliance with the MiFID II framework when using AI-based systems.
In its statement, ESMA acknowledges that AI as a technology can be used in various areas of the financial services industry, each of which may be associated with different risks that investment firms must duly consider. To that end, ESMA has listed some common AI use cases, which include:
To find out more about the potential use cases for AI in the financial services sector and the regulatory considerations around them, check out our dedicated article “AI in Financial Services: Embracing the new reality”.
In its statement, ESMA has also listed some key risks that the use of AI in retail investment services may pose to financial institutions, including in particular:
ESMA has restated the key obligation of investment firms relying on AI-based systems for the provision of investment services: to ensure compliance with key MiFID II requirements in the same way as when using any other system or technology for the same purpose. To that end, although the EU AI Act will regulate the use of AI across various industries on a horizontal basis, investment firms must first and foremost fully comply with their sector-specific regulatory requirements under the MiFID II framework.
Some key regulatory requirements that investment firms need to keep in focus include the following:
Investment firms must ensure that their management body takes a leading role in integrating AI systems within the organisation, in order to ensure compliance with MiFID II organisational requirements.
For this purpose, the firm's management board should have an appropriate understanding of AI as a technology, the way it functions, and the ways in which it can be applied within the firm. Likewise, the management board is required to ensure appropriate oversight of the use of AI-based systems.
Further, the management board must ensure that the firm's use of AI systems is in line with the firm's overall strategy, risk tolerance, and compliance framework.
ESMA expects firms to maintain a transparent relationship with their clients when it comes to the use of AI in the provision of investment services. To that end, firms are required to inform their clients about how exactly AI is used in the provision of the regulated service, and to provide this information in a fair, clear and non-misleading way (i.e. avoiding technical or confusing language that an ordinary retail investor is unlikely to understand).
It is also crucial for firms to ensure that the output generated by AI systems is fit for purpose and suitable for the needs and preferences of a particular client. To ensure this, firms are expected to rely on their internal processes for the suitability assessment and ensure that the recommendations and decisions generated by AI systems are aligned with the client's financial situation, investment objectives (including sustainability preferences and risk tolerance), and knowledge and experience.
Where a firm uses AI in client-interaction systems, such as chatbots, clients must likewise be duly informed that they are communicating with an AI-based system.
Investment firms must ensure that they have effective risk management frameworks specific to their AI implementation that give due consideration to all risks that may be associated with AI.
For this purpose, investment firms should establish robust governance structures, conduct regular AI model testing, and monitor AI systems to identify and mitigate potential risks and biases. Internal testing and monitoring processes and plans should be proportionate to the scale, complexity, and potential risks associated with the use of AI systems in a particular case, with specific attention given to areas where AI has the most significant influence on the firm's processes and client services related to the provision of retail investment services.
Conscious of the fact that the quality of the output generated by AI systems is only as good as the data fed into the algorithm, ESMA has emphasised that it expects investment firms to ensure that the data used as input for AI systems is reliable, sufficient and representative, and that the algorithms are trained and validated on accurate, comprehensive and sufficiently broad datasets.
Where firms rely on AI systems developed and operated by third parties (such as specialised IT service providers), ESMA expects investment firms to ensure compliance with the MiFID II requirements on outsourcing arrangements and, in particular, with the EBA Guidelines on outsourcing arrangements, which regulate this matter in more detail.
It is noteworthy that many of the above risk-management related points in the statement echo the key requirements of the EU's new regulatory framework on digital operational resilience under the Digital Operational Resilience Act (DORA), which will require investment firms to comply with new risk management, testing, back-up and recovery, and incident management requirements for the information and communication technology (ICT) based part of their business.
To find out more about DORA, please check out our webinar series “Exploring DORA”, which provides an overview of the new framework and explores some key practical considerations that financial entities and IT service providers should keep in focus in the coming period.
In line with the key record-keeping requirements under MiFID II, ESMA expects firms to maintain comprehensive records of the use of AI in the provision of the regulated service or in key back-office functions.
The records should in particular include information about how AI was used in decision-making processes, what the data sources were, and how they were analysed.
It is indisputable that the use of AI in the financial services sector presents a significant opportunity for the entire industry, promising new levels of time, operational and cost efficiency as well as potentially much better investment outcomes. However, the challenges the industry will face, from both a regulatory and a risk management standpoint, are not to be underestimated.
In its statement, ESMA has emphasised that it aims to support firms in their use of AI in the investment services space by fostering transparency, robust management of the relevant risks associated with this technology, and compliance with key regulatory requirements under the MiFID II framework.
While the statement is only the first step that ESMA has taken in this area, firms are strongly encouraged to seek further guidance on this topic and to engage constructively with their national competent authorities, which can help them navigate the complex intersection of regulation and AI technology in the investment services space.
ESMA's guidance is a very welcome development in this rapidly evolving and increasingly important area of the financial services sector. It represents helpful initial guidance for firms looking to use AI in the investment services space, which in the coming period will need to navigate a complex regulatory landscape comprising the new AI Act and DORA, both of which will have a significant impact on firms active in this space.
by multiple authors