AI is the omnipresent buzzword in the legal industry. Consequently, the EU AI Act is one of the main items on the radar of many legal professionals. But what about this other piece of legislation that was a very hot topic back in 2018, the General Data Protection Regulation (GDPR)?
After over seven years, GDPR compliance has become familiar territory for many in-house counsel and other professionals. There are a number of similarities and interplays between the newly introduced AI Act and the GDPR, as well as explicit references to the GDPR in the AI Act. In this article, we will discuss some of the most significant interactions.
General
It's striking the extent to which the overall shape of the AI Act is influenced by the GDPR. Many of the data principles (transparency, accuracy, security) are mirrored in the AI Act which, like the GDPR, takes a risk-based approach.
The obvious intersection in terms of AI Act compliance arises because developing and using AI systems typically involves training AI models on data, some of which will almost certainly be personal data, which must be processed in compliance with the GDPR. This requires appropriate data privacy safeguards, not only internally, but also in the relationship with suppliers and other parties involved in the AI ecosystem.
Significantly, Article 47 of the AI Act requires providers of high-risk AI systems to draw up a declaration of conformity, which must include a statement of GDPR compliance where the system involves the processing of personal data (Annex V). In addition, in many Member States, the Data Protection Authority will also act as the AI Act market surveillance authority.
Transparency
The GDPR and the AI Act each have their own transparency regimes. In both cases, the rationale is that individuals need to receive adequate information on how their personal data is processed/how the use of AI affects them.
- Under the GDPR (Articles 12-14) data subjects must be given clear and accessible information about the processing of their personal data, including purposes, legal basis, recipients, storage periods and data subject rights.
- Under the AI Act, several transparency obligations apply. For example, Article 13 requires that deployers be given instructions for use of high-risk AI systems. In addition, Article 50 requires deployers to inform natural persons that they are interacting with an AI system (unless obvious from context).
Risk management
From a risk-management perspective, the GDPR and the AI Act both take a risk-based approach to compliance. However, there is a fundamental difference between the two in terms of the stage at which risk is addressed. In general, the GDPR allows a broader margin of discretion in weighing and balancing interests.
- The GDPR is risk-based, requiring prior and ongoing assessment of the risks associated with the processing of personal data. This approach drives the need for technical and organisational measures that address risk proportionately.
- Under the AI Act, AI systems are categorised into different tiers of risk: unacceptable risk (which is prohibited), high risk, and low/minimal risk. The most onerous obligations attach to high-risk AI systems, which must comply with comprehensive requirements relating to risk management, assessment and mitigation. Specific measures also apply to general-purpose AI (GPAI) models and GPAI models with systemic risk.
Accountability
Both the GDPR and the AI Act require accountability. Broadly, this involves documenting your various steps, processes and policies in order to demonstrate compliance.
- Whereas the GDPR requires data processing agreements to be put in place between controllers and (sub-)processors, the AI Act requires sufficient contractual safeguards between the various roles as defined in the AI Act.
- Whereas the GDPR requires the documentation of elements including Data Protection Impact Assessments (DPIAs) and processing records, the AI Act requires more elaborate documentation of, for example, development and design choices for high-risk AI systems and GPAI models in order to demonstrate accountability.
DPIA and fundamental rights assessments
Both the GDPR and the AI Act require risk assessments.
- Under the GDPR, in specific instances controllers must carry out a Data Protection Impact Assessment (DPIA)
- Article 26(8) AI Act states that where applicable, deployers of high-risk AI systems must use information provided under Article 13 of the AI Act to comply with their GDPR obligation to carry out a DPIA
- In addition, the AI Act requires certain deployers to carry out a “fundamental rights impact assessment” (FRIA) before deploying high-risk AI systems mentioned in Annex III of the AI Act. Where the deployer has already carried out a GDPR DPIA which meets any of the AI Act FRIA requirements under Article 27, the FRIA will complement it (Article 27(4) AI Act).
Processing of sensitive personal data
The AI Act (Article 10(5)) exceptionally allows processing of sensitive personal data, but only if it is strictly necessary for the purpose of ensuring bias detection and correction in high-risk AI systems and under certain conditions. These apply in addition to the GDPR Article 9 requirements, which provide exceptions to the general prohibition on processing sensitive (special category) personal data.
Automated decision-making and human oversight
Both the AI Act and GDPR have a form of a human oversight mechanism.
- Article 22 GDPR gives data subjects the right not to be subject to a solely automated decision which has a legal or similarly significant effect on them. If they object to such a solely automated decision, they have the right to ask for a new decision that is subject to human intervention.
- The AI Act has a stronger regime in place in Article 14, under which high-risk AI systems must be designed with human-machine interface tools enabling effective oversight, and deployers must assign competent oversight personnel. For example, humans must be able to monitor operation of the AI system and correct outcomes or even decide not to use the system or employ a 'stop' button as an emergency measure. In addition, under Article 86, individuals have a right to explanation where deployers take decisions on the basis of output from a high-risk AI system (subject to limited exceptions).
Incident reporting
Both regimes implement an incident reporting system, allowing for filing initial as well as subsequent reports if the full scope or extent of an incident is not yet known at the required reporting time.
- Under the GDPR (Article 33), the main reportable incident is a personal data breach. In principle, such a breach must be notified to the relevant Data Protection Authority (DPA) without undue delay and, where feasible, not later than 72 hours after the data controller has become aware of it. Additional notifications to the affected individuals may be required where the breach is likely to result in a high risk to their rights and freedoms.
- Under the AI Act (Article 73), serious incidents must be reported to the regulatory authorities immediately after a causal link between the AI system and the serious incident (or the reasonable likelihood of such a link) has been established, and in any event not later than 15 days. However, this ultimate deadline depends on the severity of the incident: it is shortened to two days in the case of a widespread infringement or a serious and irreversible disruption of the management or operation of critical infrastructure, and to ten days in the event of the death of a person.
Handling the dual compliance burden
This is just an overview of some of the areas where the concepts of the GDPR overlap with those in the AI Act. In terms of the compliance burden, however, there are only limited situations in which compliance with one piece of legislation will be sufficient to comply with the other. What is more likely is that existing GDPR compliance processes can be expanded, where necessary, for AI Act compliance purposes. However, for accountability purposes, it will be important to distinguish between GDPR and AI Act requirements.
Useful actions include:
- Map AI Act roles (provider, deployer, importer/distributor) against GDPR roles (controller/processor).
- Collect adequate information and incorporate it into relevant guidance for users/data subjects. Regularly review whether the information is up to date.
- Ensure that AI training data that contains personal data (which is often the case), is subject to adequate data privacy safeguards, both from a contractual and a technical perspective.
- Document AI development and monitoring so that you can demonstrate compliance, both during the development and the deployment phase.
- Ensure that the AI Act FRIA and GDPR DPIA are mapped and streamline the workload where appropriate. The GDPR DPIA should be conducted first using Article 13 AI Act information, followed by the FRIA.
- Verify whether processing of sensitive personal data in the context of the AI Act is permitted under each of the GDPR and the AI Act.
- Distinguish clearly between post-processing human intervention under the GDPR and human monitoring under the AI Act. The latter requires a broader set of skills and more detailed product knowledge.
- Implement proper timeline management within your organisation, taking into account the fundamental differences between the types of incident that can occur in relation to AI systems from both a GDPR and AI Act perspective. Ensure both sets of requirements are included in your breach preparedness planning.
- Understand which authority (or authorities) regulates you under the GDPR and the AI Act.