There is growing interest in the insurance sector in the use of big data and artificial intelligence (AI) (subsequently referred to as "BDAI"). Reinsurance and insurance undertakings see BDAI as an opportunity, and increasingly also as a competitive necessity, to better assess the (re)insurability of risks. Given advancing climate change, this applies in particular to the assessment of risks associated with natural hazards. In addition, the use of BDAI can support (re)insurance undertakings in developing new (re)insurance products or improving existing ones. Last but not least, the use of BDAI can contribute to the creation of new ecosystems, especially through more efficient design and execution of claims handling and in insurance distribution. However, the use of BDAI by insurance undertakings not only opens up new business opportunities but also raises legal issues relating to insurance supervision:
The EU Commission's current proposal for an AI Act, dated 21 April 2021, aims at legal harmonisation and legal certainty and classifies AI according to its impact on fundamental rights, security and privacy. It does not contain any requirements explicitly addressed to (re)insurance undertakings. The starting point of the AI Act is the classification of AI-based solutions into four risk categories: "unacceptable", "high", "low" and "minimal". The "high risk" category, which concerns applications that process personal data, is likely to be particularly relevant for insurers, for example in the analysis of contract data and thus the data of policyholders, which include industrial companies as well as natural persons. "Low risk" applies to interactive tools that are clearly identifiable as software and whose use the users decide on freely and on their own responsibility. These include the chatbots now used by most German insurance undertakings in customer service.
In parallel to the EU Commission, a consultative expert group set up by the EU insurance supervisory authority EIOPA (European Insurance and Occupational Pensions Authority) published guidance on 17 June 2021 in the paper "Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector", which explains six governance principles to be taken into account when using AI: proportionality, fairness and non-discrimination, transparency and explainability, human oversight, compliance with EU data protection law, and robustness and performance of AI.
The German insurance supervisory authority BaFin (Bundesanstalt für Finanzdienstleistungsaufsicht) has long been considering BDAI as part of its supervision (see the BaFin study "Big Data meets Artificial Intelligence" from 2018). Technically, BaFin defines AI in this study "as a combination of big data, computing resources and machine learning (ML)". According to BaFin, machine learning means giving computers the ability to learn from data and experience on the basis of specific algorithms. BaFin defines algorithms as rules of action, usually integrated into a computer programme, that solve an (optimization) problem or a class of problems.
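To make this definition concrete: the following minimal, purely illustrative sketch (not taken from the BaFin study) shows an algorithm in BaFin's sense, a rule of action that learns from data by solving a least-squares optimization problem via gradient descent. The synthetic data and variable names are assumptions for illustration only.

```python
import numpy as np

# Purely illustrative: synthetic "experience data" for a toy claims model.
# One hypothetical risk feature x and observed claim costs y.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, size=200)

# "Rule of action" in BaFin's sense: gradient descent solving the
# least-squares optimization problem min over (w, b) of mean((w*x + b - y)^2).
w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(2000):
    pred = w * x + b
    grad_w = 2.0 * np.mean((pred - y) * x)  # derivative of the error w.r.t. w
    grad_b = 2.0 * np.mean(pred - y)        # derivative of the error w.r.t. b
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

# The machine has "learned from data": w ends up close to 3.0, b close to 0.5.
print(f"learned parameters: w={w:.2f}, b={b:.2f}")
```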
In its current publication "Big Data and Artificial Intelligence: Principles for the use of algorithms in decision-making processes" dated 15 June 2021, BaFin explains that, in its view, there is not yet a clear-cut definition of AI, and formulates principles for the use of algorithms in the decision-making processes of financial companies. For the development phase, it describes requirements on how an algorithm should be selected, calibrated and validated. For the application phase, financial companies are to interpret the results of the algorithm and integrate them into their decision-making processes. Against this background, BaFin comments on (i) the conceptual framework, (ii) overarching principles for the use of algorithms in decision-making processes, (iii) specific principles for the development phase and (iv) specific principles for the application phase, and mentions concrete use cases in each instance.
Conceptual framework
- No general approval of algorithms by BaFin: BaFin does not grant any blanket approval of algorithm-based decision-making processes. Instead, it intends to examine and, if necessary, object to such processes on a risk-oriented and event-driven basis, for example in licensing procedures (especially for InsurTechs), but also in the course of ongoing supervision.
- Scope of supervision: Supervision of algorithmic decision-making processes should follow a risk-oriented, proportionate and technology-neutral approach. More intensive supervision is appropriate where the use of an algorithm in decision-making processes entails (additional) risks, especially in the case of insurance undertakings.
Overarching principles for the use of algorithms in decision-making processes
- Clear responsibility of the management: The management is responsible for the company-wide strategies and guidelines or policies governing the use of algorithm-based decision-making processes. Accordingly, the management board itself always remains responsible for material business decisions, even if they are based on algorithms. On the one hand, this requires an adequate technical understanding on the part of the management and thus raises the "fit and proper" requirements. On the other hand, reporting lines and reporting formats must be designed in such a way that risk-adequate and addressee-appropriate communication is ensured.
- Adequate risk and outsourcing management: It is the task of the management to establish risk management adapted to the use of algorithm-based decision-making processes. If applications are purchased from a service provider, the management must establish effective outsourcing management. In this context, responsibility, reporting and control structures must be clearly defined. Other existing regulatory requirements (e.g. on outsourcing to the cloud) remain unaffected. The use case mentioned is telematics tariffs in motor vehicle insurance.
- Avoid bias: BaFin remains vague here, stating only that bias, i.e. the systematic distortion of results, must be avoided in algorithm-based decision-making processes. In line with the "polluter pays" principle, it will be particularly important to analyse where a bias occurs or can occur in the first place, which can be quite difficult in practice.
- Exclude legally prohibited differentiation: Certain characteristics may not be used for differentiation, i.e. for risk and price calculation. In particular, companies are required to establish (statistical) verification processes that rule out such discrimination; a sketch of one such check follows this list.
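By way of illustration only, such a (statistical) verification process could compare outcome rates across groups defined by a prohibited characteristic and test whether the observed difference is statistically significant. The following minimal sketch assumes synthetic data, invented group labels and a chi-squared test; none of this is prescribed by BaFin.

```python
import numpy as np
from scipy.stats import chi2_contingency

def check_disparity(outcomes: np.ndarray, groups: np.ndarray, alpha: float = 0.01):
    """Hypothetical check: do acceptance rates differ across groups defined
    by a characteristic that must not drive risk or price differentiation?"""
    table = []
    for g in np.unique(groups):
        accepted = int(outcomes[groups == g].sum())
        rejected = int((groups == g).sum()) - accepted
        table.append([accepted, rejected])
    chi2, p_value, _, _ = chi2_contingency(np.array(table))
    return p_value, p_value < alpha  # flag if the difference is significant

# Synthetic example: 1 = contract offered, 0 = declined.
rng = np.random.default_rng(42)
groups = rng.integers(0, 2, size=5000)  # protected attribute, groups A/B
outcomes = rng.binomial(1, np.where(groups == 0, 0.70, 0.62))

p, flagged = check_disparity(outcomes, groups)
print(f"p-value={p:.5f}, review required: {flagged}")
```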
Specific principles for the development phase
- Observe data protection rules: Any use of data in algorithm-based decision-making processes must be compliant with applicable data protection rules.
- Ensure correct, robust and reproducible results: The overall goal is to ensure correct and robust results. The results of an algorithm should also be reproducible, so that the user can regenerate them, for example in the event of a subsequent review by an independent third party.
- Documentation for internal and external traceability: Sufficient documentation is a prerequisite for checking algorithms and the underlying models.
- Appropriate validation processes: Each algorithm should undergo an appropriate validation process before being adopted for operational use; a sketch combining the reproducibility and validation principles follows this list.
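The following minimal sketch illustrates how the reproducibility and validation principles can be operationalised, assuming a scikit-learn-style workflow, synthetic data and an invented acceptance threshold: all randomness is pinned to a fixed seed so that an independent third party can regenerate the results, and the model is only approved for operational use if it clears a holdout metric defined in advance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

SEED = 20210615          # fixed seed: identical results on every rerun
MIN_HOLDOUT_AUC = 0.70   # hypothetical acceptance threshold, set in advance

# Synthetic stand-in for underwriting data (illustration only).
rng = np.random.default_rng(SEED)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

# Validation before operational use: test on data the model has never seen.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=SEED
)
model = LogisticRegression(random_state=SEED).fit(X_train, y_train)

auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
approved = auc >= MIN_HOLDOUT_AUC
print(f"holdout AUC={auc:.3f}; approved for operational use: {approved}")
```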
Specific principles for the application phase
- "Putting the human in the loop": Employees should be appropriately involved in interpreting and using algorithmic results for decision-making. The extent of their involvement should depend on how business-critical the decision-making process is and what risks it involves; a sketch of confidence-based routing to human review follows this list. Sanctions screening in money laundering detection is cited as a use case.
- Intensive approval and feedback processes: When algorithmically generated results are used in decision-making processes, the situations that trigger a more intensive approval process should be clearly defined in advance.
- Establishment of emergency measures: Companies should have measures in place to maintain business operations in the event of problems with algorithm-based decision-making processes. This applies at least to business-critical applications.
- Ongoing validation, higher-level evaluation and appropriate adjustment: In practical application, algorithms must be validated on an ongoing basis in order to check their functioning and detect deviations against defined parameters, with adjustments made where necessary; a drift-monitoring sketch also follows this list. According to BaFin, validation is particularly necessary if new or unforeseen internal or external risks arise that could not be taken into account when the algorithms were created.
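How "the human" can be put "in the loop" in practice can be illustrated with a simple routing rule: the algorithm decides alone only in clear, low-risk cases, while uncertain or business-critical cases go to a human reviewer. The thresholds, field names and the fraud-scoring use case below are assumptions for illustration, not BaFin requirements.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    model_score: float  # e.g. estimated probability that a claim is fraudulent
    amount_eur: float   # proxy for how business-critical the decision is

# Hypothetical routing limits; in practice these would be set by company policy.
AUTO_APPROVE_BELOW = 0.10   # low fraud score: straight-through processing
LARGE_CLAIM_EUR = 100_000   # business-critical claims always go to a human

def route(case: Case) -> str:
    """Return who decides: the algorithm alone or a human reviewer."""
    if case.amount_eur >= LARGE_CLAIM_EUR:
        return "human_review"    # criticality overrides model confidence
    if case.model_score <= AUTO_APPROVE_BELOW:
        return "auto_decision"   # clear low-risk case: algorithm decides alone
    return "human_review"        # uncertain or suspicious: human decides

print(route(Case("C-1", model_score=0.05, amount_eur=2_500)))   # auto_decision
print(route(Case("C-2", model_score=0.55, amount_eur=2_500)))   # human_review
```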
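Ongoing validation can be implemented, for example, as drift monitoring. The sketch below uses the population stability index (PSI), a widely used measure that compares the live distribution of a model input or score with the distribution at development time and flags the algorithm for revalidation when a defined parameter is exceeded. The synthetic data and the 0.25 threshold (a common rule of thumb) are assumptions, not BaFin figures.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between the development-time distribution (expected) and the
    live distribution (actual) of a model input or score."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch values outside the range
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

# Synthetic example: the live data has drifted away from the development data.
rng = np.random.default_rng(7)
dev_scores = rng.normal(0.0, 1.0, 10_000)    # distribution at development time
live_scores = rng.normal(0.6, 1.3, 10_000)   # distribution in production

psi = population_stability_index(dev_scores, live_scores)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 monitor, > 0.25 revalidate.
print(f"PSI={psi:.3f}; revalidation required: {psi > 0.25}")
```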
BaFin itself describes these principles as "preliminary considerations on minimum supervisory requirements for the use of artificial intelligence". They are intended to serve as a basis for discussion with various stakeholders and are explicitly embedded in the international regulatory projects mentioned above.