15 August 2024
Lending Focus – August 2024 – 4 of 6 Insights
Artificial intelligence is advancing rapidly thanks to the development of new technologies and a widening range of applications. This blog post gives an overview of how large language models (LLMs) can be helpful in finance and how they can foster the efficiency and effectiveness of banking in a meaningful way.
LLMs are natural language processing (NLP) models built on the transformer architecture. They are pre-trained on vast volumes of text and then adapted to specific tasks through transfer learning, often refined further with reinforcement learning from human feedback. Companies can use them to extract insights from large volumes of text data and to improve their content creation. A well-known example of the use of LLMs is automated chatbots in customer support.
Which types of LLMs exist?
There are several different types of LLMs. The most widely used include "GPT-4" by OpenAI, "PaLM 2" by Google and "Llama 2" by Meta. Each of these models has its own strengths and weaknesses. When choosing an LLM, the adaptability of the model, technical compatibility and costs should be weighed in light of the business objective.
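The selection criteria above can be made concrete with a simple weighted scoring exercise. The sketch below is purely illustrative: the weights and per-model scores are assumptions for demonstration, not real benchmark figures.

```python
# Illustrative weighted scoring for LLM selection.
# Criteria weights and candidate scores are assumed values, not benchmarks.
CRITERIA_WEIGHTS = {"adaptability": 0.4, "compatibility": 0.3, "cost": 0.3}

def score_model(scores: dict) -> float:
    """Return the weighted total for one candidate (criterion scores 0-10)."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "model_a": {"adaptability": 8, "compatibility": 6, "cost": 5},
    "model_b": {"adaptability": 6, "compatibility": 9, "cost": 7},
}

# Pick the candidate with the highest weighted score.
best = max(candidates, key=lambda name: score_model(candidates[name]))
```

In practice the weights would be set by the business objective, for example weighting cost more heavily for a high-volume chatbot use case.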
With LLMs, users can interact with an AI system through text-based conversations they are already familiar with, allowing people to communicate with technology using natural language.
Financial institutions can use LLMs for marketing initiatives such as financial education, product upselling and cross-selling, and customer engagement and personalisation. In addition, standardised responses can be provided to frequently asked questions. LLMs can be used for gathering information, creating marketing materials, generating ideas, raising customer engagement and more.
This may bring a number of benefits, including lower expenses, increased productivity and, likely, improved client satisfaction. Some financial services firms have already started developing AI-enabled apps to boost their operational effectiveness and income and to provide better client experiences in all facets of the business, from banking to financial technology.
By considering the individual requirements and circumstances of customers or service providers, LLMs are able to achieve more accurate and useful results. Analysing annual financial statements and checking compliance with regulatory requirements (eg IFRS) or long-term trends are also possible use cases. By assisting in automating financial reports, LLMs improve speed and accuracy and reduce manual errors.
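Automated checks of this kind need not rely on the LLM alone: simple deterministic rules can verify figures an LLM extracts from a financial statement. The sketch below, a minimal illustration rather than a real compliance tool, checks the basic accounting identity that assets equal liabilities plus equity.

```python
def balance_sheet_consistent(assets: float, liabilities: float,
                             equity: float, tolerance: float = 0.01) -> bool:
    """Check the accounting identity: assets = liabilities + equity.

    A small tolerance absorbs rounding differences in reported figures.
    """
    return abs(assets - (liabilities + equity)) <= tolerance

# Example: figures an LLM might extract from an annual report (illustrative).
extracted = {"assets": 500.0, "liabilities": 320.0, "equity": 180.0}
ok = balance_sheet_consistent(**extracted)
```

Running such rule-based checks over LLM-extracted figures is one way to catch extraction errors before a report is relied upon.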
LLMs can be a useful tool for portfolio management by offering data, analysis and support on investment strategy and decision-making. This can take the form of information on specific businesses, sectors or industries, supported by investment research that enables in-depth investigation before investment decisions are made.
The adoption of generative AI promises to enhance the effectiveness of financial markets. However, it is accompanied by several obstacles and legal restrictions, such as data security concerns and the slow evolution of laws regulating its use. The European Artificial Intelligence Act (AI Act) entered into force on 1 August 2024; however, compliance with it is phased in over a three-year period. The compliance deadline for the majority of obligations is 2 August 2026.
As well as benefits, the use of LLMs in financial services also harbours risks. For example, the results produced by LLMs still need to be checked to catch so-called “hallucinations” of AI bots. AI hallucination is a phenomenon wherein LLMs perceive patterns or objects which are non-existent or imperceptible to human observers and/or create nonsensical or inaccurate outputs. (Human) validation processes are therefore essential for the results of LLMs.
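A validation process can be partly automated by routing only suspect outputs to a human reviewer. One simple heuristic, sketched below under the assumption that the LLM answers questions about a given source document, is to flag any numeric figure in the answer that does not appear in the source. The function names are illustrative.

```python
import re

def ungrounded_figures(answer: str, source: str) -> list:
    """Return numbers in an LLM answer that do not appear in the source text."""
    answer_nums = re.findall(r"\d+(?:\.\d+)?", answer)
    source_nums = set(re.findall(r"\d+(?:\.\d+)?", source))
    return [n for n in answer_nums if n not in source_nums]

def needs_human_review(answer: str, source: str) -> bool:
    """Flag an answer for human review if it contains ungrounded figures."""
    return bool(ungrounded_figures(answer, source))
```

This is only a first filter: a hallucinated figure that happens to occur elsewhere in the source would pass, so human spot checks remain essential.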
In addition, LLMs often encounter problems when dealing with complex or unusual situations, and the quality of the data available to the model can also influence the reliability of the result.
The use of LLMs in the financial sector also raises several ethical and legal issues. LLMs can inadvertently perpetuate biases present in the training data, leading to unfair or discriminatory outcomes in financial decisions. Some experts see a risk that LLMs could be used to manipulate markets by spreading misinformation or executing trades based on flawed analysis.
LLMs often operate as “black boxes,” making it difficult to understand how they arrive at certain decisions. This lack of transparency can be problematic in finance, where accountability is crucial.
Given that financial services are heavily regulated, it is important to ensure that LLMs comply with all relevant laws and regulations. In addition to compliance with general legal standards, these include the transparency of decision-making processes for complex financial decisions and transparency towards consumers and customers. It will also be necessary to ensure human oversight of LLMs and AI applications, particularly in sensitive areas. Effective measures against potential market manipulation by AI systems must also be considered.
Handling sensitive financial data requires stringent security measures to prevent data breaches and to ensure compliance with regulations. One strategy to help achieve data security when using LLMs could be encryption: using strong encryption methods to protect data both at rest and in transit. This aims to ensure that even if data is intercepted, it cannot be read without the encryption key. Other ways could be the implementation of strict access controls so that only authorised personnel can access sensitive financial data by using multi-factor authentication and role-based access controls, data anonymisation or the monitoring of all activities involving LLMs to detect and respond to any suspicious activities promptly.
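Role-based access controls of the kind described above can be illustrated with a short sketch. The roles, permissions and function names below are assumptions for demonstration only; a real deployment would draw permissions from an identity provider and combine them with multi-factor authentication.

```python
from functools import wraps

# Illustrative role-to-permission mapping (assumed, not a real bank's policy).
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "compliance_officer": {"read_reports", "read_client_data"},
}

def require_permission(permission):
    """Decorator enforcing role-based access control on a function."""
    def decorator(func):
        @wraps(func)
        def wrapper(role, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"role {role!r} lacks {permission!r}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("read_client_data")
def fetch_client_data(role, client_id):
    # Placeholder: a real system would query an encrypted data store here
    # and log the access for monitoring.
    return {"client_id": client_id, "accessed_by": role}
```

Logging each call of such guarded functions would also support the monitoring of LLM-related activity mentioned above.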
To sum up, LLMs should be considered a helpful “assistant” rather than an omniscient “prophet” in finance. They have the power to transform asset allocation procedures when applied correctly within predetermined parameters. The potential applications of LLMs in financial services are vast and continue to expand as technology advances.
However, there is still – and hopefully always will be – a significant need for human experts when using AI-driven large language models in finance. In portfolio management, for example, someone must ensure that LLM output is reconciled with theoretical frameworks and investment theory, both to guarantee competence and dependability and to confirm that the information provided by LLMs is correct.
To discuss the issues raised in this article in more detail, please contact a member of our Banking and Finance team in Vienna.
Authors: Carmen Redmann-Wippel and Eda Koc