As hyper-realistic deepfakes proliferate across the internet - from financial fraud schemes to threats to democratic stability - the EU is taking steps to tackle the surge of AI-generated synthetic content.
Article 35 of the EU Digital Services Act (DSA) and Article 50 of the EU AI Act (AI Act) are the key legislative instruments against the risks deepfakes pose to society. Article 35 DSA requires the largest online platforms to proactively mark deepfakes distributed on their platforms. Article 50 AI Act similarly obliges all deployers of AI to disclose that image, video, or audio deepfakes are AI-generated. But can transparency requirements and platform technical enhancements effectively tackle the proliferation of deceptive content? We explore the complex legal and technological challenges that will shape the integrity of our digital information ecosystem.
What are deepfakes?
A deepfake (or, in AI Act terms, a deep fake) under the AI Act is AI-generated or manipulated image, audio or video content that resembles real people or objects and gives a false appearance of authenticity (Article 3(60) AI Act). The DSA does not use the term, but Article 35(1)(k) DSA imposes obligations on providers of very large online platforms (VLOPs) in relation to the same material. It requires VLOPs to mark content which “constitutes a generated or manipulated image, audio or video that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful”. This means the DSA and the AI Act overlap in their treatment of this type of content, except that in a DSA context a deepfake does not necessarily have to be AI-generated, even though it often will be.
Deepfakes are often created using advanced AI techniques such as Generative Adversarial Networks (GANs) to produce convincing audio-visual forgeries. The risks posed by deepfakes are multifaceted, ranging from the proliferation of non-consensual pornography, which constitutes the vast majority of all deepfakes, to large-scale financial fraud - very often through scam calls - and the destabilisation of democratic processes.
The Digital Services Act and deepfakes
From risk assessment to mitigation: Articles 34 and 35
The DSA establishes a tiered system of obligations, imposing the most stringent duties on platforms with over 45 million monthly active EU users. For these Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) (jointly referred to in this article as VLOPs for ease), the Regulation moves beyond reactive content moderation. For example, Article 34 of the DSA mandates that they conduct annual assessments to identify significant "systemic risks" stemming from their services. These risks explicitly include the dissemination of illegal content, negative effects on fundamental rights such as human dignity and privacy, and adverse impacts on civic discourse and electoral processes - categories under which deepfakes pose direct and significant threats.
Article 35 obliges VLOPs to implement "reasonable, proportionate and effective mitigation measures" tailored to the specific risks identified. Measures must be both effective in practice and proportionate in order to prevent over-censorship that could undermine freedom of expression.
A transparency-first approach: Article 35(1)(k)
While the general duties of Article 35 cover the potential harms of deepfakes, a specific sub-provision, Article 35(1)(k), is widely seen as the main focus of deepfake regulation within the DSA. It provides that mitigation measures may include ensuring that generated or manipulated media which "appreciably resembles existing persons... and falsely appears to a person to be authentic or truthful is distinguishable through prominent markings".
This clause establishes a 'transparency-first' approach. For a deepfake that is deceptive but not otherwise illegal under national law, the principal remedy is not removal, but labelling. This represents a deliberate policy choice to create transparency rather than compelling platforms to delete all deceptive but lawful content.
The Article 35 DSA obligation is technologically demanding. To meet the "effective" mitigation standard, VLOPs must independently develop robust systems to detect unlabelled deepfakes, thrusting them into what may be called a perpetual and costly technological arms race against ever more sophisticated generation techniques.
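Purely by way of illustration, such a detection system might layer a check for machine-readable provenance signals on top of model-based analysis. The short Python sketch below assumes the Pillow imaging library; the detection model itself (`score_with_detector`) is a hypothetical placeholder for whatever proprietary classifier a platform might deploy, not a real tool or API.

```python
# Illustrative triage sketch only: a two-stage check a platform might run on
# uploaded images. The detection model is a hypothetical placeholder.
from PIL import Image


def has_provenance_tag(path: str) -> bool:
    """Return True if the file carries a simple 'ai_generated' metadata flag.

    Real deployments would look for richer provenance signals (e.g. content
    credentials or robust watermarks) rather than a bare PNG text chunk.
    """
    with Image.open(path) as img:
        text_chunks = getattr(img, "text", {})  # PNG tEXt/iTXt chunks, if present
        return text_chunks.get("ai_generated", "").lower() == "true"


def score_with_detector(path: str) -> float:
    """Hypothetical placeholder for a platform's in-house detection model."""
    raise NotImplementedError("Plug in a real detection model here.")


def triage_upload(path: str, threshold: float = 0.8) -> str:
    """Decide whether an upload should be prominently marked as synthetic."""
    if has_provenance_tag(path):
        return "mark: provenance metadata says AI-generated"
    if score_with_detector(path) >= threshold:
        return "mark: detector score above threshold"
    return "no marking: no signal detected"
```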
The AI Act and deepfakes
Similarly to the DSA, the AI Act also imposes transparency requirements on deepfakes. The scope of addressees differs, though: while the DSA focuses on platforms, the AI Act requires the “deployer” of the AI system (broadly, the user) generating the deepfake to disclose that the content has been artificially generated or manipulated (Article 50(4) AI Act). Notably, while the DSA targets only certain very large actors, the AI Act sets no size threshold, and all deployers regardless of size must comply with the disclosure obligation.
There is no established best practice yet on how to comply with the disclosure requirement. Given the aim is to indicate to third parties that the depicted person, voice, or event is not real, the disclosure should be as clear and unambiguous as possible. For instance, a visible label such as “AI-generated image” could be integrated into an image or video. For audio content, a spoken notice could announce that the material was created by AI.
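To make this concrete, the following sketch shows one way a visible “AI-generated image” caption could be stamped onto a still image, using Python and the Pillow library. The wording, placement and styling are assumptions made for the purpose of the example, not a statement of what Article 50(4) AI Act requires.

```python
# Illustrative sketch: stamp a visible "AI-generated image" label onto a
# still image with Pillow. Wording and placement are assumptions only.
from PIL import Image, ImageDraw, ImageFont


def add_visible_label(in_path: str, out_path: str,
                      label: str = "AI-generated image") -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Place the label in the bottom-left corner with a dark backing box so it
    # stays legible against any background.
    margin = 10
    left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
    text_w, text_h = right - left, bottom - top
    x, y = margin, img.height - text_h - margin
    draw.rectangle([x - 4, y - 4, x + text_w + 4, y + text_h + 4], fill=(0, 0, 0))
    draw.text((x, y), label, fill=(255, 255, 255), font=font)

    img.save(out_path)


# Example usage (paths are placeholders):
# add_visible_label("deepfake.png", "deepfake_labelled.png")
```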
Importantly, the AI Act establishes a privilege for evidently artistic, creative, satirical, fictional or analogous content. Labelling may be done “in an appropriate manner” that “does not hamper the display or enjoyment of the work,” thereby balancing freedom of expression and artistic freedom on the one hand with the interest in disclosing deepfakes on the other. The rationale is not only the importance of protecting freedom of expression and freedom of the arts and sciences, but also the assumption that (end) users in an artistic (or satirical, etc.) context will typically not assume they are being presented with 'real' images or sounds. In practice, this means that the notice could, for example, be shown only at the beginning or in the credits of a video if showing it the whole time would disturb the artistic experience, e.g. where deepfakes are used in video games or films.
In addition to the specific transparency requirements for deepfakes, providers of AI systems – broadly speaking, the developers of the AI system – have to ensure that all synthetic AI-generated audio, image, video, or text content is clearly identified as artificially created or manipulated (Article 50(2) AI Act). While this applies to a much smaller group of addressees – i.e. only the providers – the obligation covers a broader range of content. Deepfakes are almost always AI-generated or manipulated, however, so virtually all deepfakes are also covered by the provider obligation to 'watermark' AI-generated content. Providers have to include a machine-readable label, which can take the form of watermarks, metadata tags, cryptographic methods, digital fingerprints or other appropriate techniques. Complying with this requirement will also help VLOPs under the DSA, as it enables them to quickly detect and filter out AI-generated content.
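As a simple illustration of a machine-readable label, the sketch below writes a basic provenance flag into a PNG text chunk with Pillow and reads it back. A bare text chunk is trivially easy to strip, which is precisely why the AI Act points to more robust techniques such as watermarks, cryptographic methods and digital fingerprints; the tag names used here are assumptions for illustration only.

```python
# Illustrative sketch: embed and read back a simple machine-readable
# "ai_generated" flag as a PNG text chunk. Real-world labelling would rely on
# more robust techniques (watermarks, cryptographic provenance, etc.).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(in_path: str, out_path: str) -> None:
    """Provider side: write a provenance flag into the output file."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # assumed tag name
    meta.add_text("generator", "example-model")  # illustrative only
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=meta)


def read_ai_flag(path: str) -> bool:
    """Platform side: check whether the flag is present."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated", "").lower() == "true"


# Example usage (paths are placeholders):
# tag_as_ai_generated("output.png", "output_tagged.png")
# print(read_ai_flag("output_tagged.png"))
```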
The law in action: challenges and the emerging enforcement landscape
The DSA's framework, while comprehensive, faces significant practical and legal tests. The first judicial interpretations are emerging, with litigants already bundling claims to challenge platforms' core functions. A significant class action filed in Germany against X, among others, directly alleges that the platform violates the DSA by "facilitating the spread of disinformation, deepfakes, and misleading content". The plaintiffs are demanding "effective measures against disinformation", mirroring the language of Article 35 and asking the court to assess the adequacy of the platforms' mitigation strategies.
This legal pressure is complemented by assertive regulatory oversight, and Member States are also stepping up their own rules. Notably, a legislative proposal in Germany aims to create a new criminal offence (§ 201b of the Criminal Code) specifically targeting the malicious creation of deepfakes.
It may take some time before we know how successful the AI Act and DSA are in tackling the potential harms relating to deepfakes, as market practice and case law (not to mention the technology behind deepfakes) develop.