On 23 May 2025, the Higher Regional Court of Cologne dismissed the application by the Consumer Protection Organization of North Rhine-Westphalia for an injunction against Meta's AI training. Following a positive assessment by the Irish Data Protection Authority, a German court has now also ruled in favour of Meta. The Hamburg Data Protection Commissioner, however, remains critical.
Meta, which operates services such as Facebook and Instagram, announced in mid-2024 that it would use public posts from EU users for AI training purposes. Meta informed users that they could actively object by 27 May 2025. The announcement sparked a controversial public debate and at the same time raised fundamental questions about the meaning and interpretation of European data law, in particular the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA). The underlying legal issues are relevant to all companies that want to use AI: the case concerns, more generally, the permissibility of using existing data to train or improve AI, at least where that data contains personal data.
Legal background
Anyone who processes personal data, such as Facebook posts, needs a legal basis. For the AI training, Meta relies on what is known as 'legitimate interest'. This allows Meta to process the data of affected users unless they object to the use (opt-out). Meta had informed users about this by email. Consumer protection associations, however, call for a different legal basis: they consider the consent of the affected users (opt-in) to be required. In particular, they argue that special categories of personal data are being processed, such as health data, for which 'legitimate interest' is not a sufficient basis. In practice, opt-in would mean that Meta could not use a large proportion of the data, because many users would not actively consent. Conversely, under opt-out the majority would not object. Meta's status as a designated gatekeeper under the DMA also plays a role in the discussion.
Contrary views of the data protection authorities
While the Irish Data Protection Authority, the lead authority for Meta, recently commented positively on Meta's plans in a press release, critical voices could still be heard in Hamburg. After more than a year of proceedings before the Irish Data Protection Commission, during which Meta addressed the authority's concerns by adjusting its concept, for example with improved transparency notices and easier-to-use objection forms, the authority allowed Meta to proceed but reserved the right to re-evaluate the situation in October 2025. The European Data Protection Board, the body bringing together all European data protection authorities, was also involved in the proceedings in order to achieve European harmonisation.
However, the Hamburg Data Protection Commissioner (HmbBfDI) takes a different view. According to press reports, the HmbBfDI initiated urgent proceedings against Meta shortly before the start of Meta's AI training; Meta must respond by 26 May 2025. The HmbBfDI intends to prohibit Meta from using the data of German data subjects for AI training for at least a further three months.
Proceedings before the Higher Regional Court of Cologne
The case is receiving particular attention due to the preliminary injunction proceedings before the Higher Regional Court of Cologne. German and European consumer protection organisations consider Meta's approach to be unlawful. The Consumer Protection Organization of North Rhine-Westphalia therefore applied for a preliminary injunction (15 UKl 2/25) to stop the AI training, at least temporarily. It argued in particular that Meta could not provide a suitable legal basis for the data processing and that Meta was also violating the DMA, because the AI training unlawfully combines personal data from different platforms.
The Higher Regional Court of Cologne rejected the application. In the court's view, Meta's interest in the data processing outweighed the interests of the data subjects, among other things because Meta had taken effective measures that significantly mitigated the interference with the data subjects' rights. The court also found no unlawful combination of personal data within the meaning of the DMA, since Meta does not combine data relating to individual users.
Nevertheless, the comments made by the HmbBfDI are likely to be of particular interest for future data processing in Germany. The HmbBfDI was consulted in the proceedings as the competent German data protection authority. In particular, it questioned whether it was necessary at all to process data on the scale Meta intended. The HmbBfDI emphasised to TW its legal opinion that the 'de-identification' relied on by Meta, i.e. the removal of data such as vehicle registration numbers, credit card numbers and the like, cannot be invoked without restriction as a risk-mitigating measure in the balancing of interests, because the processing of this data is not necessary in the first place. Ultimately, however, this is a rather dogmatic question.
In addition, the HmbBfDI addressed numerous other issues during the six-hour hearing. For example, it took the view that posts on Meta platforms are not 'public' if they can only be viewed after logging in, and argued that, at least for 'historical' data, processing for the purpose of AI training was not foreseeable for the data subjects. The argument that the data concerned was originally stored only for communication on the platform and not for training a separate AI model is likely to carry particular weight in future reviews of similar cases.
Furthermore, the right to object offers no protection to some data subjects, namely those who do not have a Facebook account themselves, such as people who are depicted in a posted image.
Assessment and conclusion
The proceedings dealt with fundamental questions of European data (protection) law. For companies, they also show that considerable uncertainty remains, particularly in the area of AI training. Above all, the authorities in the European Union are not addressing the emerging legal issues in a uniform manner, which adds to that uncertainty.
At the same time, the cautious approach taken by the Irish Data Protection Authority and the decision of the Higher Regional Court of Cologne show that AI training with existing data is not per se inadmissible, even if personal data is involved. Correctly preparing the data, supported by measures to protect those affected, is therefore the first step in avoiding conflicts with data protection authorities. Developers and lawyers should work closely together on this.
Many thanks to Christian Zander, who followed the proceedings live and reported back to the authors.
Last update: 26 May 2025.