Co-Author: Christian Zander
Introduction
Artificial intelligence (AI) has evolved from a topic for the future into a central driver of transformation across industries. Advancing digitalization through the use of AI is also opening up new opportunities in broker sales: it enables efficiency gains, personalized customer engagement and innovative business models. At the same time, however, it brings legal challenges with it. This article summarizes the current developments and legal risks of AI in broker sales.
Possible applications of AI in broker sales
The ongoing digitalization in the insurance industry offers numerous opportunities to use AI to increase efficiency in the day-to-day work of insurance brokers. Various AI systems are already in use today:
- Chatbots and virtual assistants serve as the first point of contact for customers. They process inquiries automatically, answer common questions and can even handle simple insurance applications, which reduces the workload on employees.
- AI-supported sales analysis can evaluate large amounts of customer data to recognize patterns and identify sales opportunities, making personalized offers possible. Predictive analytics can be used to forecast future customer behavior and to develop targeted marketing strategies on that basis. In addition, automated underwriting systems enable faster decision-making by analyzing the risk profile of potential policyholders.
- AI also facilitates the management of contracts and documents by organizing them and monitoring compliance with legal requirements. Other applications such as customer segmentation and fraud prevention further expand the versatile spectrum of AI in the insurance industry.
Legal challenges
Although these new technologies offer a wide range of opportunities, they are also associated with a number of legal challenges:
- AI Act: On 1 August 2024, the AI Act came into force as the first comprehensive set of rules for the regulation of AI. This regulation sets out the framework conditions for the development and deployment of AI systems. The aim is to strengthen society's trust in AI systems and to enshrine basic ethical principles in law without hindering technical progress.
To this end, AI systems are divided into four categories according to a risk-based approach: unacceptable, high, certain and minimal risks. Systems with unacceptable risk, such as those for manipulating human behavior or social scoring, are prohibited. High-risk systems, such as those that make decisions about people in sensitive areas, are subject to strict requirements. Systems with certain risks must be designed transparently so that it is apparent that an AI is involved.
The AI Act also has implications for the insurance industry.
Prohibited systems are unlikely to play a role in broker sales, but some systems could be classified as high-risk and would then have to meet strict requirements, for example with regard to risk management and data security.
The most common case, however, is likely to be that AI systems used in broker sales are classified as systems with certain risks and must therefore meet transparency requirements. These include, for example, chatbots and virtual assistants developed to interact with humans. Such AI systems must be designed so that users are informed that they are interacting with an AI system. If the AI system generates or manipulates text, the provider must ensure that this is disclosed in machine-readable form. Deployers of such AI systems that generate or modify text and publish it to inform the public about matters of public interest (e.g. journalists, content creators or scientific authors) are obliged to indicate that the text has been artificially generated or manipulated.
However, the AI Act is not the only set of rules that must be complied with when using AI:
- Data protection law: AI systems must be designed in line with data protection law. Data protection principles such as lawfulness, data minimization and purpose limitation therefore also apply to AI systems. Transparency in particular, specifically the provision of data protection information to data subjects, poses a major challenge. One reason is that automated decisions also require information about the logic involved, which is often difficult to provide given the complex functioning and decision-making of AI systems. Safeguarding other data subject rights, such as the right of access or the right to erasure, can likewise pose problems for operators of AI systems. Companies should therefore ensure that their AI systems process personal data only where this is essential and has been legally reviewed beforehand.
- Copyright law: Specific copyright issues also arise in connection with AI. Under German copyright law, the legal classification of the protection of AI input and output is complex. AI-generated output can be assumed not to infringe copyright as long as it keeps sufficient distance from a protected work. An infringement could, however, arise from the reproduction of a copyrighted work during input. What is certain is that only a natural person, not the AI itself, can be the author.
- Protection of trade secrets: Particular caution is required when dealing with trade secrets, which include, for example, customer lists, internal cost structures and specific know-how. Ideally, this information should be treated confidentially and not fed into an AI system, neither as training data nor as input data.
Outlook
With the rapid advances in AI, the brokerage industry stands at the dawn of a new era. The integration of AI systems has the potential to fundamentally transform sales by enabling greater efficiency, personalized services and data-driven decision-making. However, this technological change also brings challenges: compliance with regulatory requirements and the protection of sensitive data are crucial for the sustainable integration of AI into broker-supported sales.