Article 18 of the newly adopted European Media Freedom Act (EMFA), which applies from 8 August 2025, introduces safeguards intended to protect media content, including a requirement that very large online platforms (VLOPs) give self-declared media service providers 24 hours to reply to a statement of reasons before restricting their content under the Digital Services Act (DSA). However, this procedural protection is conditional: it depends on the provider declaring, among other things, that it does not provide AI-generated content without human review or editorial control. This creates a point of intersection between the EMFA, the DSA and the EU’s AI Act and raises questions about liability, responsibility and the scope of fundamental rights protection.
Special protection of media service provider content under the EMFA
Content provided by a media service provider that has submitted a declaration pursuant to Article 18(1) EMFA enjoys enhanced protection. In that declaration, the provider must assert that it is a media service provider editorially independent from Member States, political parties, third countries and entities controlled or financed by third countries, and that it does not provide content generated by AI systems without subjecting it to human review or editorial control.
If a VLOP intends to suspend or restrict the visibility of such content on the grounds that it infringes the platform’s terms and conditions (including, for example, following a notice under Article 16 DSA), the VLOP first has to: (a) submit to the media service provider concerned a statement of reasons in line with the Platform-to-Business Regulation ((EU) 2019/1150) and the DSA, and (b) give the media service provider the opportunity to reply to the statement of reasons within 24 hours of receipt (or a shorter period in the case of a crisis under the DSA). If the VLOP then decides to suspend or restrict visibility, it must inform the media service provider without undue delay. Exceptions apply where the suspension or restriction concerns illegal content or obligations regarding the protection of minors.
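To make the sequence concrete, the following minimal sketch (in Python) models the order of steps a VLOP would need to follow before acting against content from a declared media service provider. Every name and data structure below is invented for illustration; none is taken from the EMFA, the DSA or any real platform system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative only: all names here are the author's shorthand, not statutory terms.

REPLY_WINDOW = timedelta(hours=24)  # may be shorter in a DSA crisis situation


@dataclass
class ModerationCase:
    has_emfa_declaration: bool               # Article 18(1) self-declaration on file
    illegal_content_or_minors: bool          # cases outside the prior-notice procedure
    statement_of_reasons_sent_at: Optional[datetime] = None
    provider_reply_received: bool = False


def may_restrict_now(case: ModerationCase, now: datetime) -> bool:
    """Whether the platform may proceed to suspend or restrict visibility."""
    # Restrictions concerning illegal content or the protection of minors are
    # not subject to the Article 18(4) prior-notice procedure.
    if case.illegal_content_or_minors:
        return True
    # Without an Article 18(1) declaration, only the general DSA regime applies.
    if not case.has_emfa_declaration:
        return True
    # With a declaration on file, a statement of reasons must be sent first ...
    if case.statement_of_reasons_sent_at is None:
        return False
    # ... and the provider must have had the chance to reply (reply received
    # or the 24-hour window elapsed) before the restriction takes effect.
    window_elapsed = now - case.statement_of_reasons_sent_at >= REPLY_WINDOW
    return case.provider_reply_received or window_elapsed
```

The point of the sketch is simply that, once an Article 18(1) declaration is on file, the statement of reasons and the reply window become preconditions for any restriction not covered by the illegal-content or minor-protection exceptions.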
AI-generated content and the EU’s legal framework
A key element of the legal challenge lies in how the EU regulates AI-generated content. Under the Union’s framework, content produced solely by AI does not benefit from the fundamental right to freedom of expression. According to the European Commission, fundamental rights – including freedom of expression – are inherently human. In response to a formal inquiry, the Commission clarified that “human rights are inherent to all human beings”, and that “it is always the individuals who may avail themselves of free expression rights and their protection”. Consequently, “automatically generated and published content does not in itself enjoy any protection in this respect”.
From a legal standpoint, AI-generated output is treated not as 'speech' but as a 'product' or a 'service'. This classification shifts the legal focus from fundamental rights law to regulations governing market safety, consumer protection, and product liability.
For this AI-assisted content to be covered by the EMFA’s procedural protections and potentially benefit from freedom of expression safeguards, a human must take meaningful responsibility for it. This is typically done through either 'human review' or 'editorial control', which imply differing levels of oversight and accountability.
Distinguishing between 'human review' and 'editorial control'
The distinction between human review and editorial control is important, as it may determine whether content qualifies for the exemptions and protections under both the EMFA and the AI Act. Although both terms refer to human involvement, they imply different degrees of intervention:
Nature and scope of oversight
- Human review generally refers to a limited, compliance-oriented process. The concept appears in Articles 6(3)(c) (for high-risk systems) and 50(4) of the AI Act, as well as in Recitals 53 and 134, but the principle can be applied here too: in journalism it involves verifying AI-generated output against factual or legal benchmarks (eg copyright compliance, avoidance of defamation) and making a binary determination (eg approval or rejection).
- Editorial control, by contrast, implies a more substantive engagement with the content; reference may be made to the concept of 'editorial responsibility' under the EU’s AVMS Directive, understood as the exercise of effective control both over the selection of content and over its organisation.
Legal accountability
- In the case of human review, responsibility often remains internal and procedural, tied to compliance roles or risk-mitigation processes.
- With editorial control, a person or entity assumes editorial (and thus legal) responsibility for the published content. This can have implications for liability and is relevant to establishing eligibility for legal protections, such as those under the EMFA.
Legal intersections: Article 18 EMFA and the AI Act
This distinction is central to the interplay between Article 18 EMFA and Article 50(4) of the AI Act. The latter contains an exception to its transparency requirements: disclosure that content has been AI-generated is not required where the content is published for the purpose of informing the public on matters of public interest (which would typically include journalistic content), has been subject to 'human review or editorial control', and a natural or legal person assumes editorial responsibility for its publication. Thus, when a media provider declares under Article 18(1) EMFA that it exercises such oversight, it is in effect simultaneously claiming this exemption. This raises several interpretative questions, such as what constitutes a sufficient standard of review and what form the review or control must take, eg whether public attribution or authorship is necessary. These questions remain open and are likely to be clarified through further regulatory guidance or case law.
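The structure of that exemption can be summarised in a short decision sketch. The function and parameter names below are the author’s shorthand, not terms used in the AI Act, and the sketch deliberately ignores the open interpretative questions just mentioned.

```python
def ai_text_disclosure_required(
    published_to_inform_public: bool,
    human_review_or_editorial_control: bool,
    editorial_responsibility_held: bool,
) -> bool:
    """Rough reading of the Article 50(4) AI Act labelling rule for text."""
    # The disclosure obligation is scoped to AI-generated or manipulated text
    # published with the purpose of informing the public on matters of
    # public interest.
    if not published_to_inform_public:
        return False
    # Exception: the text has undergone human review or editorial control AND
    # a natural or legal person holds editorial responsibility for it.
    if human_review_or_editorial_control and editorial_responsibility_held:
        return False
    return True
```

On this reading, a truthful Article 18(1) EMFA declaration addresses the first limb of the exception (human review or editorial control), while the assumption of editorial responsibility supplies the second.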
Practical implications: EMFA vs DSA framework
The procedural protections of the EMFA - particularly the 24-hour notice period under Article 18(4) - are conditional on the self-declaration under Article 18(1) EMFA. If no such declaration is made (eg where content is generated by AI and published without human review or editorial control), platforms may argue that the content does not qualify for EMFA-specific safeguards.
In such cases, content moderation would be governed by the DSA alone, lowering the threshold for content removal or restriction by platforms.
Operational considerations for media organisations
To navigate this evolving legal environment, media organisations should consider implementing internal processes to categorise AI-assisted content according to the level of human oversight. This could involve applying human review for basic compliance and factual verification, and exercising editorial control where the organisation seeks to assume full legal responsibility for the content and, in turn, to benefit from EMFA protection.
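By way of illustration, such a triage step might look as follows. The categories and field names are invented for this sketch and do not map onto statutory definitions.

```python
from enum import Enum, auto


class Oversight(Enum):
    NONE = auto()               # fully automated output, no human involvement
    HUMAN_REVIEW = auto()       # compliance-oriented check, approve/reject
    EDITORIAL_CONTROL = auto()  # substantive editing and shaping of the content


def triage(oversight: Oversight, responsibility_assumed: bool) -> dict:
    """Map the level of human oversight to the protections discussed above."""
    return {
        # An Article 18(1) EMFA declaration presupposes that AI-generated
        # content is not published without human review or editorial control.
        "emfa_declaration_supported": oversight is not Oversight.NONE,
        # The Article 50(4) AI Act exception additionally requires a natural
        # or legal person to hold editorial responsibility for the publication.
        "ai_labelling_exemption_plausible": (
            oversight is not Oversight.NONE and responsibility_assumed
        ),
    }
```

Even a crude classification of this kind would let an organisation document, for each item it publishes, which safeguards it intends to rely on.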
The need for legal clarity
As the use of generative AI in journalism continues to expand, the threshold between unprotected AI 'product' and protected human 'speech' becomes increasingly important. The legal frameworks provided by the EMFA, the AI Act, and the DSA create a structured but still-evolving landscape. Similar considerations are relevant in the area of copyright, where insufficient human involvement may result in a loss of copyright protection.
Ultimately, a central question remains: how much human involvement is required to confer legal protection on AI-assisted content? Clarifying this threshold - whether through guidance, judicial interpretation, or industry standards - will be essential for ensuring legal certainty and maintaining media freedom in an AI-augmented publishing environment.