Co-author: Paul Friedl
Introduction
The duty for market participants established outside the EU to designate an official “representative” on EU territory is a cornerstone of EU digital and product safety regulations, appearing across multiple frameworks including the GDPR, the Digital Services Act, the NIS2 Directive, the Data Governance Act and the Medical Devices Regulation. This obligation requires non-EU businesses operating in the EU to maintain a readily accessible point of contact for European stakeholders.
Following this established pattern, the new European Artificial Intelligence Act (AI Act) also mandates that certain operators based outside the EU appoint an “authorised representative”.
Operators concerned: providers of high-risk AI systems and general-purpose AI models
Under the AI Act, two types of operators based outside the EU must appoint an “authorised representative”: providers of so-called high-risk AI systems and providers of general-purpose AI models (GPAI models). The first category comprises all entities that develop and place on the EU market AI systems that are either (1) part of a product covered by certain EU legislation, such as the Machinery Directive or the Medical Devices Regulation (the complete list of covered legislation can be found in the Act’s Annex I), or (2) intended to be used in one of eight designated “high-risk” areas, such as “education and vocational training” or “employment and workers’ management” (the complete list of covered areas can be found in the Act’s Annex III).

GPAI models, on the other hand, are defined as AI models that are “trained with a large amount of data using self-supervision at scale”, that display “significant generality”, are capable of “competently performing a wide range of distinct tasks” and “can be integrated into a variety of downstream systems or applications” (Article 3(63)). Large Language Models are prime examples of GPAI models. Providers that release a GPAI model under a free and open-source license are, however, exempt from the obligation to appoint a representative, unless the model presents a systemic risk.
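The conditions above can be read as a simple decision tree. The following Python sketch is purely illustrative and not legal advice: all class and field names are our own, and the legal tests are deliberately simplified summaries of Articles 22 and 54.

```python
# Illustrative sketch only (not legal advice): when does the AI Act's duty
# to appoint an authorised representative arise? All names below are our
# own simplifications of the legal tests.
from dataclasses import dataclass


@dataclass
class Operator:
    established_in_eu: bool
    provides_high_risk_system: bool   # Annex I product or Annex III area
    provides_gpai_model: bool         # Article 3(63)
    gpai_open_source: bool = False    # released under a free and open-source license
    gpai_systemic_risk: bool = False  # model presents a systemic risk


def must_appoint_representative(op: Operator) -> bool:
    if op.established_in_eu:
        return False  # the duty only concerns operators based outside the EU
    if op.provides_high_risk_system:
        return True   # Article 22(1)
    if op.provides_gpai_model:
        # Open-source GPAI models are exempt unless they present a systemic risk
        return not op.gpai_open_source or op.gpai_systemic_risk
    return False


# Example: a non-EU provider of an open-source LLM without systemic risk
print(must_appoint_representative(Operator(
    established_in_eu=False,
    provides_high_risk_system=False,
    provides_gpai_model=True,
    gpai_open_source=True,
)))  # False
```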
The set of duties
Both types of AI providers are subject to a largely identical set of duties (Articles 22 and 54, respectively). Centrally, they must appoint, by written mandate, an authorised representative before making their system or model available on the EU market. This mandate must empower the representative to perform (at least) the following four tasks:
- verify that the provider has drawn up the necessary technical documentation. According to Article 11, before placing a high-risk system on the market, providers must draw up technical documentation that demonstrates the compliance of the AI system and includes descriptions of, inter alia, the system’s training data, architecture and testing procedures (Article 11, Annex IV). Providers of GPAI models are likewise subject to a duty to draw up technical documentation, the details of which are specified in Article 53(1)(a) and Annex XI. In the case of high-risk systems, the representative must furthermore be able to verify that the provider has drawn up the EU declaration of conformity (Article 47), in which the provider affirms that the system meets all applicable legal requirements, and that the provider has carried out an appropriate conformity assessment procedure (Article 43). Finally, in the case of GPAI models, the representative must be able to verify that the provider has fulfilled all obligations under Article 53 and, where applicable, Article 55, such as the obligations to perform model evaluations or to conduct adversarial testing. These verifications may require access to a wide range of internal documents;
- keep at the disposal of the competent authorities the provider’s contact details, the technical documentation, a copy of the EU declaration of conformity (only for high-risk systems) and, if applicable, an official certificate. Such certificates are issued where an AI system has undergone a conformity assessment procedure by a notified body, which is required, inter alia, for providers of AI systems that form part of products covered by the EU legislation listed in Annex I. These documents must be kept available for a period of at least 10 years;
- provide competent authorities, upon a reasoned request, with all the information and documentation necessary to demonstrate the system’s or model’s conformity with the AI Act’s requirements, including the technical documentation and, where applicable, automatically generated logs (Article 21(1)), to the extent such logs are under the control of the provider;
- cooperate with competent authorities in any action the latter take in relation to the AI system or GPAI model. For providers of GPAI models, this duty extends to situations in which authorities want to take action against downstream AI systems which have integrated the GPAI model (e.g. a chatbot built on top of a Large Language Model).
In the case of high-risk systems, the representative must also ensure that the system is registered in the EU database referred to in Article 49(1) or, if the provider has already carried out the registration itself, verify that the registered information is correct (this duty is limited to high-risk systems under Annex III).
What is more, the mandate must empower the representative to be addressed by the competent authorities on all issues relating to the regulation’s enforcement and, upon request by such an authority, the representative must be able to provide a copy of the mandate in one of the EU’s official languages, as indicated by the requesting authority. If the representative considers the provider to be acting contrary to its obligations under the AI Act, it must terminate the mandate and immediately inform the relevant market surveillance authority of the termination and the reasons for it. Where a high-risk system is imported into the EU, the importer must ensure that the provider has appointed an authorised representative pursuant to these obligations (Article 23(1)(d)).
Sanctions in the case of non-compliance
The failure to appoint a representative constitutes a case of “formal non-compliance” under Article 83(1), which, if left unaddressed, shall lead the competent authorities to restrict or prohibit the making available of the concerned system (Article 83(1)) or GPAI model (Article 93(1)). It may also be penalised with administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher (Article 99(4)(b)). Notably, representatives themselves may also be subject to fines, as they too are covered by the term “operator” pursuant to Article 3(8) and are listed in the provision on penalties (Article 99(4)(b)).
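Expressed as arithmetic, the Article 99(4) ceiling follows a simple “whichever is higher” rule. The short Python sketch below is purely illustrative: the function name and inputs are our own, and actual fines are set case by case by the competent authorities.

```python
# Illustrative only: the statutory maximum fine under Article 99(4) AI Act,
# i.e. EUR 15 million or 3 % of total worldwide annual turnover for the
# preceding financial year, whichever is higher. This computes the ceiling,
# not the fine an authority would actually impose.

FIXED_CAP_EUR = 15_000_000
TURNOVER_SHARE = 0.03  # 3 % of total worldwide annual turnover


def max_fine_art_99_4(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine under Article 99(4)."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)


# Example: a provider with EUR 2 billion in worldwide annual turnover faces
# a ceiling of EUR 60 million, since 3 % of turnover exceeds the fixed cap.
print(max_fine_art_99_4(2_000_000_000))  # 60000000.0
```

Note that for SMEs, including start-ups, Article 99(6) reverses this logic: the lower of the two amounts applies.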
Summary and advice
The obligation to appoint an authorised representative under the AI Act is a key compliance requirement for non-EU providers of high-risk AI systems and GPAI models. With severe penalties for non-compliance and the Act's broad territorial scope, companies providing AI systems or models that may affect EU users should assess their obligations and establish proper representation early. The requirements for providers of GPAI models apply from 2 August 2025, while those for providers of high-risk AI systems take effect, for the most part, on 2 August 2026.