What's the problem?
For some time, businesses have been concerned about how the General Data Protection Regulation (GDPR) applies to the development and use of artificial intelligence (AI) technologies. Whether it is training an AI model with personal data, or considering how personal data generated by an AI can be used in a GDPR-compliant way, the principles and requirements of the GDPR do not neatly integrate with the burgeoning world of AI. One of the most fundamental requirements under the GDPR is to demonstrate a lawful basis (under Article 6) for collecting and using personal data as part of any AI technology.
It's generally understood that the most realistic lawful basis under the GDPR for businesses to rely on is legitimate interest, given the unsuitability of the other lawful bases in most circumstances (for the purposes of this article, we're not considering the implications of processing special category data). However, to rely on legitimate interest as a lawful basis, a business needs to be able to demonstrate why it is entitled to do so in the technical sense required by the GDPR. Typically, this is set out in a legitimate interest assessment (LIA).
What does the GDPR say about legitimate interest?
To rely on the legitimate interest lawful basis, an organisation must be able to demonstrate that the processing of the personal data in question is necessary for the purposes of its legitimate interests or those of a third party, except where those interests are overridden by the interests or fundamental rights and freedoms of the individual concerned (rights and interests that carry particular weight where the individual is a child).
The legitimate interest lawful basis boils down to being able to identify:
- a legitimate interest, which can be a business interest but must not be contrary to law
- that the processing of personal data is necessary for that legitimate interest; the processing does not need to be strictly necessary, but there must be a clear connection between the personal data that the organisation intends to process and the legitimate interest itself
- that, on balancing the interests of the organisation (or any third party to whom the data is disclosed) against those of the individual, the individual's interests and rights do not override those of the organisation or third party.
This rationale is then set out in the LIA, an assessment that the GDPR does not explicitly require but that European regulators expect to see. There is no single template for carrying out an LIA under the GDPR, although some regulators provide their own.
Of course, reliance on legitimate interest is not achieved simply by documenting an LIA. Controllers must also be transparent in their privacy notice about the legitimate interests they are relying on and must provide individuals with a simple means to object.
How do regulators approach reliance on legitimate interest to process personal data as part of AI development and use?
The European Data Protection Board (EDPB) Opinion 28/2024 of 17 December 2024, on certain data protection aspects of the processing of personal data in the context of AI models, confirms that processing personal data in reliance on legitimate interest is possible in AI development and deployment. It's also notable (and surprising) that the EDPB does not say unequivocally that use of an AI model trained on unlawfully processed personal data is itself automatically unlawful. Instead, the approach repeated throughout the opinion is that data protection authorities should consider reliance on legitimate interest in the context of AI on a case-by-case basis. What will be important is how businesses demonstrate and support their arguments for reliance on legitimate interest in their LIA.
This was recently exemplified by the Irish Data Protection Commission's EUR310 million fine of LinkedIn, a decision published in October 2024 concerning online behavioural analysis and targeted marketing. One of the Commissioners, Dale Sutherland, commented on the decision: "I think the really important point here is…that for legitimate interest to be appropriate and correctly used, the assessment needs to be really robust, and it needs to really tease out the choices that have been made by organizations to ensure that the legitimate interest" of the organisation does not outweigh the rights and freedoms of the individual.
In other words, regulators will expect a rigorous examination of the interests and risks associated with processing personal data on the basis of legitimate interest, and of the balancing of interests involved, including where data is used for AI development and deployment. And while regulators have issued guidance on reliance on legitimate interest in an AI context, there is little guidance on how businesses should carry out the associated balancing exercise.
What has the Information Accountability Foundation produced?
The Information Accountability Foundation (IAF) is an independent think tank that promotes data accountability by design and advances responsible AI governance. As part of its project work on the use of legitimate interest in an AI world, the IAF committed to developing a draft legitimate interest solution framework: a process specifically designed to help businesses draft a suitable LIA when developing and using AI, and one that would meet regulators' expectations. The draft LIA was made available in November 2024 and was developed with input from businesses, regulators, academics and civil society.
Concerns raised by regulators (and noted by the IAF) include that businesses do not fully consider or understand individuals' interests and points of view when arguing that their processing can be justified on the basis of legitimate interest. Businesses also often fail to provide sufficient evidence or explanation for their use of personal data. A key problem identified is how businesses should carry out the balancing of interests assessment (the third limb of the LIA) and evidence their conclusions.
The IAF's aim, therefore, was to create a 'multi-dimensional balancing process' that helps not only with meeting GDPR obligations but also with similar assessments required by the EU AI Act, US state privacy laws and other emerging AI and digital laws. To produce the draft LIA, the IAF developed a directory of rights, interests, stakeholders and consequences, which can be used to consider and map risks (a hypothetical sketch of such a mapping follows the methodology below). The underlying goal was to help businesses document the legitimate interest process when using AI in a way that gives confidence to the regulator community.
The IAF's three-dimensional methodology considers:
- the individual rights and benefits (in a fundamental rights-based system) or established legitimate interests (in legal systems where fundamental rights are not established)
- the full range of stakeholders whose rights or interests are involved, and the impact on those interests, and
- the adverse processing impacts that may be involved (for all stakeholders), their likelihood and level of consequence, recognising that adverse processing impacts may sometimes only be reduced, not eliminated.
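To illustrate how such a mapping might be organised in practice, the sketch below is a purely hypothetical example and not part of the IAF framework or its directory: it records each stakeholder/interest pairing as a structured entry across the three dimensions and derives a simple residual risk figure. The field names, the 0 to 5 scoring scale and the weighting are all assumptions made for illustration.

```python
# Hypothetical sketch of a multi-dimensional balancing record, loosely modelled
# on the IAF's three dimensions. The fields, scale and scoring are illustrative
# assumptions, not the IAF's actual directory or methodology.
from dataclasses import dataclass


@dataclass
class BalancingEntry:
    stakeholder: str        # e.g. "data subject", "business", "wider society"
    right_or_interest: str  # the right or legitimate interest engaged
    benefit: int            # expected benefit, scored 0 (none) to 5 (high)
    adverse_impact: int     # severity of the adverse processing impact, 0 to 5
    likelihood: float       # probability the adverse impact materialises, 0.0 to 1.0
    mitigation: str         # safeguard applied; may reduce but not eliminate risk
    residual_impact: int    # severity remaining after mitigation, 0 to 5


def residual_risk(entry: BalancingEntry) -> float:
    """Illustrative risk score: residual severity weighted by likelihood."""
    return entry.residual_impact * entry.likelihood


# Example: mapping one data-subject entry for an AI training purpose.
entry = BalancingEntry(
    stakeholder="data subject",
    right_or_interest="right to privacy",
    benefit=2,
    adverse_impact=4,
    likelihood=0.3,
    mitigation="pseudonymisation of training data",
    residual_impact=2,
)
print(f"Residual risk for {entry.stakeholder}: {residual_risk(entry):.1f}")
```

A real balancing exercise is of course qualitative as well as quantitative; a structure like this can only organise the evidence that the written LIA then reasons over.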
What does the draft LIA look like?
The IAF explains that the draft LIA is structured to take account of the UK Information Commissioner's Office's three-part test for legitimate interest (the purpose, necessity and balancing tests), the decision of the Court of Justice of the European Union (CJEU) in the recent Dutch tennis association case (Case C-621/22 KNLTB), as well as EDPB guidance and draft guidance from the French data protection authority, the Commission Nationale de l’Informatique et des Libertés (CNIL). But it also incorporates requirements from Colorado and California around the use of AI, plus certain AI governance controls.
The IAF's approach specifically seeks to broaden the discussion around interests and rights, so that the focus in an LIA is not just on data protection rights and autonomy, but also on the broader societal rights that can be affected by the rise of AI. It also indicates how information security management processes can assist with, though not fully replace, the balancing section of the draft LIA, which weighs stakeholders' interests, benefits, risks and mitigations.
The draft LIA serves as an example of what a business could provide (once completed) to a regulator when it needs to demonstrate why it can rely on legitimate interest as a lawful basis for the use of personal data in the context of AI. It considers the range of interests under EU and UN legal instruments and provides a process for weighing the risks and harms against the effectiveness of mitigations.
The sections of the draft LIA itself follow the general outline of a traditional LIA but add significant further considerations that will be useful to a business as it documents why its use of personal data for AI purposes is a legitimate interest. For instance, the draft LIA flags that the 'necessity' element of the assessment will need to be revisited at key stages of the AI development data processing lifecycle, since elements may change as development advances. It also includes new sections on identifying the nature of the AI systems and any automated decision-making, so that the business documents these aspects in detail, and on setting out its AI governance measures (eg fairness, traceability, model training, model testing and equal treatment). Finally, the draft LIA sets out examples of stakeholders, benefits and risks to consider, as well as safeguards to mitigate risk (in Appendix II), plus links to examples of multi-dimensional/stakeholder balancing output tools that businesses can use to help them manage risk.
How helpful is the IAF draft LIA?
The IAF draft LIA provides a comprehensive and global framework for businesses to set out their LIA when using personal data in an AI context. Since it was developed with regulator and business input, it should be both a practical solution for businesses and a welcome approach when presented to regulators. The use of AI is only likely to come under greater scrutiny from data protection authorities in the coming months. A business that wishes to rely on legitimate interest would do well to consider the IAF draft LIA as a basis for setting out its arguments justifying legitimate interest as the appropriate GDPR lawful basis for its processing activities.