19. September 2023
In April 2021, the European Commission unveiled the EU's inaugural regulatory framework for AI, categorizing AI systems by the risk they present: the higher the risk, the stricter the rules. This proposed regulatory framework on Artificial Intelligence (AI) has already sent ripples across the tech and business sectors. For entrepreneurs navigating this new landscape, understanding the nuances of the envisaged regulation, especially concerning high-risk AI systems, will be crucial. This article breaks down the essentials with respect to high-risk AI systems.
What qualifies as a high-risk AI system?
Classification as a “high-risk AI system” rests on two criteria: AI used as a safety component of products already covered by harmonized EU product legislation, and stand-alone AI systems deployed in specific areas listed in the proposal, including:
- Biometric identification,
- Management and operation of critical infrastructure,
- Education,
- Employment,
- Access to public services and first response services and healthcare,
- Assessment of creditworthiness
Many current applications of AI technology already coincide, to some extent, with areas that fall under harmonized EU legislation. For many businesses, this will mean additional compliance costs.
Regulatory obligations for high-risk AI systems:
High-risk AI systems are subject to strict requirements:
1. Risk management (Article 9): Providers must establish a risk management system which should continuously assess and mitigate risks throughout the AI system's lifecycle.
2. Data governance (Article 10): Providers of high-risk AI systems must ensure high-quality data and proper data governance. Training, validation, and testing data should be relevant, representative, complete, and as free of errors as possible; where sensitive personal data is exceptionally processed, for instance to detect and correct bias, additional safeguards are required.
3. Technical Documentation (Article 11): Providers must draw up detailed technical documentation before the AI system is placed on the market or put into service, covering its programming, algorithms, datasets, and more. The technical documentation must demonstrate that the system meets the requirements for high-risk AI systems.
4. Record-keeping obligations (Article 12): High-risk AI systems must be designed and developed with functional features that enable automatic recording of operations and events ("logging") during their operation. Logging ensures that the operation of the AI system is traceable throughout its life cycle.
5. Transparency (Article 13): High-risk AI systems shall be designed and developed in such a way that their operation is sufficiently transparent for users to interpret and use the system's output appropriately. Users must also be provided with information on, among other things, the system's intended purpose, its level of accuracy and cybersecurity, and its capabilities and limitations.
6. Human oversight (Article 14): High-risk AI systems must be designed in such a way that they can be effectively supervised by individuals throughout their use, including with suitable human-machine interface tools. Human oversight aims to prevent or minimize risks to health, safety, or fundamental rights that may arise from the intended use or reasonably foreseeable misuse of a high-risk AI system. This can include human confirmation of AI outputs or a full human review.
7. Robustness, accuracy, and security (Article 15): High-risk AI systems must be designed to achieve consistent accuracy, robustness, and cybersecurity. Their accuracy metrics must be documented in the accompanying instructions for use. The systems must be resilient to errors and faults, including those arising from human interaction. Systems that continue learning after deployment must mitigate the risk that biased outputs feed back into future training. Additionally, they must be safeguarded against unauthorized alterations, with cybersecurity measures tailored to the specific risks, including data poisoning and adversarial attacks.
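The regulation prescribes what the logging under Article 12 must achieve, not how to implement it. The idea can be pictured as a thin wrapper that records every inference; the model, log fields, and system name below are illustrative assumptions, not anything the AI Act mandates:

```python
import json
import time

class LoggingAIWrapper:
    """Wraps a model so every inference is automatically recorded ("logging")."""

    def __init__(self, model, system_id):
        self.model = model
        self.system_id = system_id
        self.log = []  # in practice: append-only, tamper-evident storage

    def predict(self, input_data):
        result = self.model(input_data)
        # Record the event so the system's operation stays traceable
        # throughout its life cycle.
        self.log.append({
            "system_id": self.system_id,
            "timestamp": time.time(),
            "input": input_data,
            "output": result,
        })
        return result

    def export_logs(self):
        # Logs may have to be handed over to authorities on reasonable request.
        return json.dumps(self.log)

# Illustrative model: flags credit applications below a score threshold.
wrapper = LoggingAIWrapper(lambda x: "review" if x < 0.5 else "approve",
                           "credit-scoring-v1")
wrapper.predict(0.3)
wrapper.predict(0.9)
print(len(wrapper.log))  # 2 events recorded
```

In a real deployment the log would go to durable, tamper-evident storage rather than an in-memory list.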
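Likewise, the "human confirmation of AI outputs" described under Article 14 can be pictured as a gate between the system's output and its effect. A minimal sketch, assuming the human reviewer can be modelled as a callable (in practice this would be a review interface, not a function):

```python
def with_human_oversight(ai_decision, confirm):
    """Require explicit human sign-off before a high-risk decision takes effect."""
    if confirm(ai_decision):
        return ai_decision  # human confirms the AI output
    # Otherwise, fall back to a full human review of the case.
    return "escalated_for_full_human_review"

# Illustrative reviewer: signs off on approvals, escalates everything else.
reviewer = lambda decision: decision == "approve"

print(with_human_oversight("approve", reviewer))  # approve
print(with_human_oversight("reject", reviewer))   # escalated_for_full_human_review
```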
Obligations of providers, operators, and other participants
Articles 16 to 29 lay out the obligations of various parties involved in the AI value chain:
1. Obligations (Article 16): Providers must, among other things, ensure that their high-risk AI systems comply with the requirements described above, maintain a quality management system, draw up technical documentation, retain automatically generated logs, have the system undergo a conformity assessment before deployment, register it where required, and take corrective actions if non-compliance is detected.
2. Quality management system (Article 17): Providers must establish a quality management system ensuring compliance with the regulation. This system should cover design, quality control, data management, risk management, post-market surveillance, and more.
3. Technical documentation (Article 18): Providers must draw up technical documentation containing at least the minimum information specified in the AI Act, such as a general and detailed description of the AI system, its components, and its development process.
4. Conformity assessment (Article 19): Before deployment, providers must ensure their systems undergo a conformity assessment.
5. Automatically generated logs (Article 20): Providers must retain logs generated by their high-risk AI systems, insofar as these protocols are subject to their control.
6. Corrective actions (Article 21): Providers must take the necessary corrective actions, up to and including withdrawal of the product, if their high-risk AI systems do not comply with the AI Act.
7. Information obligation (Article 22): If a high-risk AI system poses a risk, providers must promptly inform relevant national authorities or the notified body that has issued a certificate for the high-risk AI system.
8. Cooperation with authorities (Article 23): Providers must cooperate with competent national authorities, providing all necessary information upon request in the respective official language. Upon reasonable request of the national authorities, the automatically generated logs must also be disclosed, provided that the provider has control over them.
9. Obligations of product manufacturers (Article 24): If a high-risk AI system is combined with a product that falls under certain harmonized legal acts of the EU (i.e., products that usually also require a declaration of conformity), the product manufacturer assumes responsibility for the AI system's compliance in the same way as the provider of the high-risk AI system.
10. Authorized representatives (Article 25): Providers established outside the EU must designate an authorized representative within the EU before making the high-risk AI system available, unless an importer can be identified.
11. Importers' obligations (Article 26): Before introducing a high-risk AI system to the market, importers must ensure the system has undergone the necessary conformity assessment, possesses the required technical documentation, and is appropriately labelled. If importers believe the system doesn't comply with regulations or poses a risk, they must rectify the issue before distribution and notify the system provider and relevant authorities. Importers are also responsible for providing their contact details on the system or its packaging and must cooperate with national authorities, ensuring storage and transport conditions don't compromise the system's compliance.
12. Distributors' obligations (Article 27): Before distributing a high-risk AI system, distributors must verify that it meets the regulatory requirements, including appropriate labelling and documentation. If distributors identify non-compliance or potential risks, they must rectify the issues, notify the relevant parties, and cooperate with national authorities, ensuring that storage and transport conditions do not compromise the system's compliance.
13. Obligations of distributors, importers, operators, or other third parties (Article 28): If distributors, importers, operators, or other third parties introduce a high-risk AI system under their own brand, modify its intended purpose, or make substantial changes to it, they are considered providers under the AI Act and assume the provider obligations. In such cases, the original provider of the AI system is no longer recognized as the provider for the purposes of the regulation.
14. Operators’ obligations (Article 29): Operators of high-risk AI systems must use these systems in accordance with the instructions provided, take appropriate technical and organisational measures, and ensure that the input data aligns with the system's intended purpose. If operators suspect that the system poses a risk or malfunctions, they must inform the provider or distributor and halt its use. Furthermore, operators must retain automatically generated logs, insofar as these are under their control, for a period appropriate to the system's intended purpose and their legal obligations.
Financial implications for companies:
Non-compliance can result in hefty fines. Depending on the type and severity of the violation, companies could face penalties of between EUR 10,000,000 and EUR 40,000,000, or 1% to 7% of their worldwide annual turnover for the preceding financial year, generally whichever is higher. Moreover, the costs of ensuring compliance, such as setting up risk management systems, maintaining technical documentation, and ensuring data governance, can themselves be significant.
The fines are designed to be effective, proportionate, and dissuasive, considering the nature, gravity, and duration of the infringement, among other factors.
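Fines of this kind in EU law are typically capped at a fixed amount or a percentage of worldwide turnover, whichever is higher. A toy calculation with purely illustrative numbers (the function and figures are assumptions for illustration, not a method given in the Act):

```python
def max_fine(turnover_eur, fixed_cap_eur, pct_cap_percent):
    """Upper bound of a fine tier: the fixed cap or the turnover-based cap,
    whichever is higher. Integer arithmetic keeps the euro amounts exact."""
    return max(fixed_cap_eur, turnover_eur * pct_cap_percent // 100)

# Illustrative only: a company with EUR 2 bn turnover facing the top tier
# cited above (EUR 40,000,000 or 7% of worldwide annual turnover).
print(max_fine(2_000_000_000, 40_000_000, 7))  # 140000000
```

For large companies the turnover-based cap dominates; for smaller ones the fixed cap does.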
However, it's not all gloom. Proper compliance can lead to increased consumer trust, opening new markets and opportunities. Moreover, understanding and integrating these regulations can offer a competitive edge, especially in the burgeoning European AI market.
Conclusion:
The EU's AI regulation is a comprehensive attempt to ensure that AI systems are used responsibly and safely. For entrepreneurs, understanding these regulations will be the first step in navigating this new landscape successfully.