16 November 2021
KI-Verordnung / AI Act (dt./eng.)
There is growing interest in the insurance sector in the use of big data and artificial intelligence (AI) (subsequently referred to as "BDAI"). Reinsurance and insurance undertakings see this as an opportunity, and increasingly also as a competitive necessity, to better assess the (re)insurability of risks by using BDAI. In view of advancing climate change, this applies in particular to the assessment of risks associated with natural hazards. In addition, the use of BDAI can support (re)insurance undertakings in developing new (re)insurance products or improving existing ones. Last but not least, the use of BDAI can contribute to the creation of new ecosystems, in particular through a more efficient design and execution of claims handling and in insurance distribution.

However, the use of BDAI by insurance undertakings brings not only new business opportunities but also legal issues of insurance supervisory law: The EU Commission's current proposal for an AI Act dated 21 April 2021 aims at legal harmonisation and legal certainty and classifies AI according to its impact on fundamental rights, security and privacy. It does not contain any requirements explicitly addressed to (re)insurance undertakings. The starting point of the AI Act is the classification of AI-based solutions into four risk categories: "unacceptable, high, low and minimal." The "high risk" category, which concerns applications that process personal data, may be particularly relevant for insurers, for example in the analysis of contract data and thus of policyholder data, relating on the one hand to industrial companies and on the other hand also to natural persons. A "low risk" exists in the case of interactive tools that are clearly identifiable as software and whose use the users decide on freely and on their own responsibility. These include the chatbots now used by most German insurance undertakings in customer service.

In parallel to the EU Commission, a consultative expert group set up by the EU insurance supervisory authority EIOPA (European Insurance and Occupational Pensions Authority) published a paper on 17 June 2021 entitled "Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector", which sets out six governance principles to be taken into account when using AI: the principle of proportionality, the principle of fairness and non-discrimination, the principle of transparency and explainability, the principle of human oversight, the principle of compliance with EU data protection law and the principle of robustness and performance of AI.

The German insurance supervisory authority BaFin (Bundesanstalt für Finanzdienstleistungsaufsicht) has long been addressing BDAI in its supervisory practice (see the BaFin study "Big Data meets Artificial Intelligence" from 2018). In this study, BaFin defines the term AI technically "as a combination of big data, computing resources and machine learning (ML)." According to BaFin, machine learning means giving computers the ability to learn from data and experience on the basis of specific algorithms. BaFin defines algorithms as rules of action, usually integrated into a computer programme, that solve an (optimisation) problem or class of problems.
In its recent publication "Big Data and Artificial Intelligence: Principles for the use of algorithms in decision-making processes" dated 15 June 2021, BaFin explains that, in its view, there is not yet a clear-cut definition of AI, and it formulates principles for the use of algorithms in decision-making processes of financial companies. With regard to the development phase of AI, the paper describes requirements on how an algorithm should be selected, calibrated and validated. With regard to the application phase, the results of the algorithm are to be interpreted by financial companies and integrated into their decision-making processes. In this context, BaFin comments on (i) the conceptual framework, (ii) the overarching principles for the use of algorithms in decision-making processes, (iii) the specific principles for the development phase and (iv) the specific principles for the application phase, and also mentions concrete use cases for each.
BaFin itself describes these principles as "preliminary considerations on minimum supervisory requirements for the use of artificial intelligence". They are intended to provide a basis for discussion with various stakeholders and are explicitly placed in the context of the international regulatory initiatives mentioned above.