19 September 2024
Co-Author: Tim-Jonas Löbeth
The AI Regulation came into force on 1 August 2024, creating the basis for the regulation of artificial intelligence in the EU. Although it may still seem new to many, its regulatory concept is not. Instead, the EU is focussing on extending the proven model of the New Legislative Framework (NLF) from product safety law to AI.
The background is simple: the CE standards are designed to interlock. For CE conformity, the other CE standards must be complied with in addition to the AI Regulation.
The "New Legislative Framework" (NLF) adopted in 2008 forms the framework for harmonised and up-to-date regulation of product safety in the EU.
Originally, the regulatory framework was limited exclusively to physical, movable objects. Software in particular was not covered by the NLF for a long time. Accompanied by the latest reform of the Product Liability Directive (Directive 85/374/EEC), the EU has changed this. The subject of the AI Act is the "AI system", which is not tied to a physical object. However, it can be part of a product. The definition of an AI system is based on the OECD definition and describes an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The NLF was conceived on the assumption that the regulations issued within this framework would relate exclusively to movable, physical objects. As this basic assumption is now being changed, some familiar principles of the NLF need to be adapted. This is particularly noticeable in the personal scope of application.
A central adjustment can be found in Art. 2 para. 1 of the AI Act. This provision imposes obligations on the "provider" of AI systems. Unlike the NLF, it does not refer to the "manufacturer". In Art. 3 No. 3 of the AI Act, the legislator defines a provider as any "natural or legal person, public authority, agency or other body that develops an AI system [...] or has it developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge". This definition gives rise to criticism in several respects.
It should be noted that the "provider" is already subject to obligations before placing its system on the market or putting it into service. For example, the provider of a high-risk AI system is required to register itself and the system in the EU database in accordance with Art. 49 para. 1 of the AI Act before placing the system on the market or putting it into service.
The role of the user is a special feature of the AI Act. Unlike under the NLF, users have a dual function: on the one hand, they are the subjects of protection under the new regulation; on the other hand, they are also obligated parties. What may initially seem ambivalent takes account, on closer inspection, of the fact that the dangers posed by AI are realised not only through the provision of such systems, but above all through their use. Consequently, Art. 5 of the AI Act prohibits not only the placing on the market and putting into service of certain systems, but also their use. However, it should be noted that the EU's approach in the AI Regulation is based on a narrow understanding of the legal concept of the user. According to the definition in Art. 3(4) of the AI Regulation (which uses the term "deployer"), any natural or legal person who uses an AI system under his or her own responsibility is initially a user within the meaning of the AI Regulation.
However, persons who use the AI system in the course of a personal, non-professional activity are excluded; this should primarily cover consumers. A user who violates one of the prohibitions in Art. 5 of the AI Act can expect a fine of up to EUR 35 million in accordance with Art. 99 para. 3 of the AI Act. It is worth noting in this context that no distinction is initially made in the obligations imposed on users as to whether the user is an entrepreneur or a consumer. This distinction only becomes relevant when determining the standard of care or the amount of the administrative fine. If the user acting in breach of the prohibition is an undertaking, a higher fine may be imposed where 7% of its total worldwide annual turnover in the preceding financial year exceeds the EUR 35 million limit.
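The interplay between the fixed ceiling and the turnover-based ceiling can be illustrated with a short calculation. The following Python sketch computes the applicable upper limit of a fine under Art. 99 para. 3 of the AI Act; the function name and the example turnover figures are purely illustrative.

```python
def max_fine_art_99_3(is_undertaking: bool, annual_turnover_eur: float = 0.0) -> float:
    """Upper limit of an administrative fine under Art. 99 para. 3 AI Act.

    For undertakings, the ceiling is EUR 35 million or 7% of the total worldwide
    annual turnover of the preceding financial year, whichever is higher.
    For other persons, only the fixed ceiling applies.
    """
    fixed_ceiling = 35_000_000.0
    if is_undertaking:
        return max(fixed_ceiling, 0.07 * annual_turnover_eur)
    return fixed_ceiling


# Illustrative figures only:
print(max_fine_art_99_3(True, 1_000_000_000))  # 70000000.0 -> turnover-based ceiling applies
print(max_fine_art_99_3(True, 200_000_000))    # 35000000.0 -> fixed ceiling applies
```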
Under the AI Act, the decisive temporal point of reference for the obligations is the time at which the system is "made available" on the market, with "placing on the market" meaning the first making available.
As the concept of making available on the market is a core element of the NLF, the discussions and developments on the interpretation of these terms in the context of the other CE standards, such as the Machinery Regulation, as well as the new version of the Blue Guide (Guide on the implementation of EU product rules 2022 - 2022/C 247/01), can be drawn upon.
For this reason, the discussion as to whether a transfer of the power of disposal is required can be kept short. At the latest since the new version of the Blue Guide, no physical handover or transfer of the power of disposal is required for making available on the Union market. Rather, the following applies: "The making available of a product supposes an offer or an agreement (written or verbal) between two or more legal or natural persons for the transfer of ownership, possession or any other right concerning the product in question after the stage of manufacture has taken place. The transfer does not necessarily require the physical handover of the product." (Blue Guide, section 2.2)
And if the AI system is made available via distance or online sales (which is often the case for AI systems), the product is already considered to be made available on the market when the offer is addressed to end-users in the Union (Blue Guide, section 2.4). An activity that is in some way directed at a Member State may already be sufficient, which brings the application of the AI Act forward considerably in time and extends it geographically. The AI Act differs from the NLF only with regard to the term "putting into service". According to the Blue Guide (section 2.6), putting into service is deemed to have taken place upon the "first intended use". In contrast, an AI system is deemed to have been put into service under the AI Act if it is supplied by the provider in the Union for first use by the deployer or for own use in accordance with its intended purpose.
In principle, providers must ensure that the system they provide undergoes a conformity assessment procedure. For high-risk AI systems, this is regulated in Art. 16 lit. f of the AI Act. The legislator thus leaves the provider the choice of carrying out the assessment procedure itself or having it carried out by a third party. An exception to this principle applies to high-risk AI systems that are intended to be used for real-time remote biometric identification or for subsequent remote biometric identification of natural persons. The conformity assessment procedure for these systems is based on the stricter requirements of Annex VI or Annex VII in accordance with Art. 43 para. 1 of the AI Act.
Downstream actors in the distribution chain, above all importers and distributors, are not obliged under the regulation to ensure that a conformity assessment procedure is carried out for the product they import or distribute. They are only prohibited from placing on the market or putting into service systems that have not undergone a conformity assessment procedure. Contractually, however, providers are of course free to engage the importer or distributor to carry out the conformity assessment procedure on their behalf.
Distributors, importers, deployers or other third parties may be deemed to be providers of an AI system if they affix their name or trademark to the system, make a substantial modification to a system that has already been placed on the market or put into service, or change the intended purpose of a system. Whether they actually offer the system is irrelevant, as their status as a provider is established by legal fiction in these cases ("shall be considered"). It should be noted in particular that the distributor, importer, deployer or other third party does not merely join the actual provider as an additional liable party, but actually replaces the latter as the party primarily responsible. This is expressly stipulated in Art. 25 para. 2 sentence 1 of the AI Act. According to this provision, the initial provider loses its status as a provider under the AI Act if the distributor, importer, deployer or other third party is to be regarded as the provider. Distributors, importers, deployers and other third parties must therefore be particularly vigilant when structuring their contractual relationships with the actual provider of the AI system.
According to Art. 16 lit. h of the AI Act, the provider of a high-risk AI system must confirm, by affixing a CE marking, that it assumes responsibility for the conformity of the product with all relevant EU regulations. Art. 48 para. 1 of the AI Act refers to Art. 30 of the Accreditation Regulation. Strictly speaking, these provisions can only be applied "accordingly": according to Art. 30 para. 1 of the Accreditation Regulation, only the manufacturer or its authorised representative is permitted to affix a CE marking to the product, whereas the concept of the manufacturer is foreign to the AI Act, whose primary subject of obligation is the provider. If the AI system is part of a physical object, for example because it is built into a product, there is nothing new to consider with regard to affixing the CE marking.
However, the situation is different for systems provided purely digitally. Art. 48 para. 2 of the AI Act states: "For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means." The consequence of this provision is that a high-risk AI system may be placed on the market or put into service within the EU under certain conditions, even without affixing a CE marking. In line with this, recital 129 states that a digital CE marking "should" be used for high-risk AI systems that are only provided digitally.
If the CE marking is required for purely digitally supplied systems, it must be affixed "visibly, legibly and indelibly". The requirements for durability, good visibility, and legibility still need to be specified. In this context, numerous questions arise in individual cases: What size must the CE marking have? Where should it ideally be positioned - on a website or in the system? Does it have to be displayed in the system itself or is it sufficient if the label is displayed on the website from which the download is made? In the front end of a system, is it sufficient to display the CE marking only on a subpage that the user can access with a few clicks, or must the marking always be visible to the user and therefore be displayed in a footer or banner integrated into the system, for example?
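Equally open is the narrower question of what an "easily accessible machine-readable code or other electronic means" within the meaning of Art. 48 para. 2 of the AI Act could look like in practice. Purely by way of illustration, and without any basis in the regulation or in harmonised standards, a provider might expose the marking as a machine-readable resource alongside the system's interface; the field names and document structure in the following Python sketch are hypothetical.

```python
import json

# Hypothetical machine-readable representation of a digital CE marking for a
# high-risk AI system provided purely digitally. The field names and structure
# are illustrative; they are not prescribed by the AI Act or any harmonised standard.
digital_ce_marking = {
    "marking": "CE",
    "legal_basis": "Regulation (EU) 2024/1689 (AI Act)",
    "provider": "Example Provider GmbH",
    "system_name": "ExampleRiskScorer",
    "system_version": "2.3.1",
    "notified_body_number": None,  # only relevant where a notified body was involved
    "eu_declaration_of_conformity": "https://example.com/compliance/eu-doc.pdf",
}

# A provider could expose this document at an easily accessible location,
# for instance next to the interface from which the system itself is accessed.
print(json.dumps(digital_ce_marking, indent=2))
```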
According to Art. 72 para. 1 of the AI Act, the provider is obliged to continue monitoring the system after it has been placed on the market and to document this monitoring. The courts will have to specify in more detail which requirements are to be placed on the monitoring arrangements. In any case, however - especially in the case of generative AI systems - it must be ensured that these systems cannot independently evade monitoring and that the provider retains the options under Art. 20 para. 1 of the AI Act, in particular the ability to shut down the system at any time. In this respect, the provider - unlike the manufacturer under the NLF with regard to its product - must retain the ability to access the system it provides.
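In technical terms, retaining this kind of access to a purely digitally provided system will typically mean that the deployed system checks a provider-controlled control plane before serving requests. The following Python sketch merely illustrates that idea; the endpoint, function names and response format are assumptions and are not prescribed by the AI Act.

```python
import json
import urllib.request

CONTROL_ENDPOINT = "https://provider.example.com/api/system-status"  # hypothetical


def provider_allows_operation() -> bool:
    """Ask the provider's control plane whether the system may keep running.

    If the provider has disabled the system (e.g. as a corrective action under
    Art. 20 AI Act), the deployed instance refuses to serve further requests.
    """
    try:
        with urllib.request.urlopen(CONTROL_ENDPOINT, timeout=5) as response:
            status = json.load(response)
        return status.get("operation_permitted", False)
    except OSError:
        # Conservative default if the control plane is unreachable: do not operate.
        return False


def handle_request(user_input: str) -> str:
    if not provider_allows_operation():
        return "System disabled by the provider."
    # ... actual model inference would happen here ...
    return f"Processed: {user_input}"
```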
Certain AI systems must undergo a conformity assessment procedure before being placed on the market or put into service. According to Art. 43 para. 4 sentence 1 of the AI Act, high-risk AI systems that have already undergone a conformity assessment procedure must undergo a new procedure "in the event of a substantial modification". It is not clear from the provision itself or from the associated recitals when a substantial modification is to be assumed.
Here, however, it will again be possible to refer to previous developments and findings from the NLF. According to the Blue Guide, a substantial modification exists in particular if i) the original performance, purpose or type of the product (or, in this case, of the AI system) is modified without this having been foreseen in the initial risk assessment, or ii) the nature of the hazard has changed or the level of risk has increased as a result of the modification. In practice, this is likely to be particularly problematic with regard to generative AI systems. These systems are characterised by the fact that they generate new content and their application possibilities are therefore virtually unlimited. Generative AI systems are based on machine learning, which means that they continue to develop independently on the basis of data - without the need for new programming by a human. The underlying system is therefore constantly changing. In practice, it would be unrealistic to carry out a new conformity assessment procedure for every minor change to the system. Against this background, it is perfectly understandable that the requirement to repeat the assessment procedure applies only to "substantial" modifications. However, a separate decision must be made in each individual case as to the point at which the system has developed so far that a "substantial modification" can be assumed. In the case of high-risk AI systems in particular, it could be assumed that a substantial modification has occurred in any case if the reference point on the basis of which the system was categorised as high-risk changes. In this context, particular attention should be paid to Annex III of the AI Act. If, for example, a system was high-risk because it captures biometric data, a new conformity assessment procedure will probably be necessary if the system is also to be used later in the area of critical infrastructure. However, it will not always be possible to focus solely on the intended purpose of the individual system, as generative AI systems - such as ChatGPT - are often not designed for a specific area of application, but can be used in a variety of ways.
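For providers, it can therefore make sense to record, at the time of the conformity assessment, which Annex III reference point the high-risk classification was based on, and to flag any later change that touches a different reference point as a candidate for a new assessment. The following Python sketch illustrates that idea only; the area labels and the decision rule are simplifying assumptions, not criteria taken from the AI Act.

```python
from dataclasses import dataclass

# Simplified, illustrative labels loosely modelled on Annex III areas.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration",
    "justice",
}


@dataclass
class ConformityRecord:
    """What the system looked like when the conformity assessment was carried out."""
    assessed_areas: set[str]          # Annex III areas covered by the assessment
    assessed_intended_purpose: str


def needs_new_assessment(record: ConformityRecord,
                         current_areas: set[str],
                         current_intended_purpose: str) -> bool:
    """Heuristic flag for a possible 'substantial modification' (Art. 43(4) AI Act).

    This is a rough screening rule, not a legal test: it flags the case where the
    system is now used in an Annex III area that the original assessment did not
    cover, or where the intended purpose itself has changed.
    """
    new_areas = current_areas - record.assessed_areas
    purpose_changed = current_intended_purpose != record.assessed_intended_purpose
    return bool(new_areas) or purpose_changed


# Example from the text: assessed for biometrics, later also used for critical infrastructure.
record = ConformityRecord({"biometrics"}, "biometric access control")
print(needs_new_assessment(record, {"biometrics", "critical_infrastructure"},
                           "biometric access control"))  # True -> re-assessment likely needed
```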
Providers of high-risk AI systems who believe that the system they have provided does not meet, or no longer meets, the requirements of the AI Act are obliged under Art. 16 lit. j of the AI Act in conjunction with Art. 20 para. 1 sentence 1 of the AI Act to "immediately take the necessary corrective actions" and, where appropriate, to withdraw the system, to take it out of service or to recall it. In practice, this means that providers must retain the ability to access purely digitally provided systems - including those offered for download on the internet. In the absence of a provision to the contrary, it can be assumed that the provider must not be dependent on the cooperation of the user for these corrective measures. It must therefore remain technically possible for the provider to switch off the system or to update it. In some cases, the possibility of updating the system will correspond to the update obligations recently introduced into the German Civil Code. However, in the case of security-relevant updates that are necessary to ensure the (continued) compliance of the system with EU regulations, it will probably not be sufficient from a product safety perspective to simply provide the user with the update. Rather, in these cases it must be ensured that the update is installed automatically - if necessary, via remote access to the system. This circumstance should be taken into account in the contractual relationship between provider and user.
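How an automatic, provider-initiated update might be wired into a digitally provided system can be sketched as follows. This is a simplified illustration under the assumption of a provider-controlled update feed; the URL, field names and helper functions are hypothetical.

```python
import json
import urllib.request

UPDATE_FEED = "https://provider.example.com/api/updates/latest"  # hypothetical


def fetch_update_metadata() -> dict:
    """Retrieve metadata about the latest available update from the provider."""
    with urllib.request.urlopen(UPDATE_FEED, timeout=10) as response:
        return json.load(response)


def apply_update_if_required(installed_version: str) -> str:
    """Install a security-relevant update automatically, without user interaction.

    Non-security updates are merely announced to the user; updates needed to keep
    the system compliant are installed immediately in this sketch.
    """
    meta = fetch_update_metadata()
    if meta["version"] == installed_version:
        return installed_version  # already up to date
    if meta.get("security_relevant", False):
        download_and_install(meta["package_url"])   # hypothetical helper
        return meta["version"]
    notify_user_about_update(meta["version"])        # hypothetical helper
    return installed_version


def download_and_install(package_url: str) -> None:
    # Placeholder: a real implementation would verify a signature before installing.
    print(f"Installing update from {package_url} ...")


def notify_user_about_update(version: str) -> None:
    print(f"Update {version} available; installation left to the user.")
```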
The main features of the EU's AI regulation follow the already familiar and proven principles of product safety law. The regulation therefore involves fewer innovations than many might expect.