17 January 2025
Co-author: Paul Friedl
The duty for market participants established outside of the EU to designate an official “representative” on EU territory is a cornerstone of EU digital and product safety regulation, appearing across multiple frameworks including the GDPR, the Digital Services Act, the NIS2 Directive, the Data Governance Act and the Medical Devices Regulation. This obligation requires non-EU businesses operating in the EU to establish a readily accessible point of contact for European stakeholders.
Following this established pattern, the new European Artificial Intelligence Act (AI Act) also mandates that certain operators based outside the EU appoint an “authorised representative”.
Under the AI Act, two types of operators based outside the EU have to appoint “authorised representatives”: providers of so-called high-risk AI systems and providers of general-purpose AI models (GPAI models). The first category includes all entities that develop and place on the EU market AI systems that are either (1) part of a product covered by certain EU legislation, such as the Machinery Directive or the Medical Devices Regulation (the complete list of covered legislation can be found in the Act’s Annex I), or (2) intended to be used in one of eight designated “high-risk” areas, such as “education and vocational training” or “employment and workers’ management” (the complete list of covered areas can be found in the Act’s Annex III). GPAI models, on the other hand, are defined as AI models that are “trained with a large amount of data using self-supervision at scale, that display[…] significant generality and [are] capable of competently performing a wide range of distinct tasks […] and that can be integrated into a variety of downstream systems or applications” (Article 3(63)). Large Language Models are prime examples of GPAI models. Providers that release a GPAI model under a free and open-source licence, however, are exempt from the obligation to appoint a representative, unless the model poses a systemic risk.
Both types of AI providers are subject to a largely identical set of duties (Article 22 and Article 54, respectively). Centrally, they have to appoint, by written mandate, an authorised representative prior to making their system or model available on the EU market. This mandate must empower the representative to perform (at least) the following four tasks:
- verify that the required technical documentation has been drawn up and that the provider has complied with its obligations under the Act (for high-risk systems, this includes verifying that the EU declaration of conformity has been drawn up and that an appropriate conformity assessment procedure has been carried out);
- keep the provider’s contact details and a copy of that documentation at the disposal of the competent authorities (and, for GPAI models, the AI Office) for ten years after the system or model has been placed on the market;
- provide the competent authorities, upon reasoned request, with all the information and documentation necessary to demonstrate compliance; and
- cooperate with the competent authorities, upon reasoned request, in any action they take in relation to the system or model.
In the case of high-risk systems, the representative must also ensure that the system is registered in the relevant EU database pursuant to Article 49(1) or, if the registration has already been carried out by the provider, ensure that the registered information is correct (this only applies to high-risk systems under Annex III).
What is more, the mandate must empower the representative to be addressed by the competent authorities on all issues relating to the regulation’s enforcement and, upon request by such an authority, the representative must be able to provide a copy of the mandate in one of the EU’s official languages, as indicated by the requesting authority. If the representative considers the provider to be acting contrary to its obligations under the AI Act, the representative must terminate the mandate and immediately inform the relevant market surveillance authority of the termination and the reasons for it. Where a high-risk system is imported into the EU, the importer must ensure that the provider has appointed an authorised representative pursuant to these obligations (Article 23(1)(d)).
Failure to appoint a representative constitutes a case of “formal non-compliance” under Article 83(1), which, if left unaddressed, requires the competent authorities to restrict or prohibit the making available of the system concerned (Article 83(1)) or GPAI model (Article 93(1)). It may also be penalised with administrative fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher (Article 99(4)(b)). Notably, representatives themselves may also be subject to fines, as “authorised representatives” (Article 3(5)) fall within the Act’s definition of “operator” and are listed in the provision on penalties (Article 99(4)(b)).
The obligation to appoint an authorised representative under the AI Act is a key compliance requirement for non-EU providers of high-risk AI systems and GPAI models. Given the severe penalties for non-compliance and the Act's broad territorial scope, companies providing AI systems or models that may affect EU users should assess their obligations and establish proper representation in good time. The obligation applies from 2 August 2025 for providers of GPAI models and from 2 August 2026 for providers of high-risk AI systems (or from 2 August 2027 for systems covered by Annex I).