1 October 2025
Radar – October 2025 – Insight 2 of 3
The European Commission has published draft guidance and a reporting template for serious incidents caused by high-risk AI systems. Both documents are open for public consultation until 7 November 2025 and give companies concrete pointers for implementing one of the law's core compliance obligations in practice.
The obligation to report serious incidents, anchored in Article 73 of the AI Act, is far more than a bureaucratic exercise. It forms the core of the EU's post-market surveillance system for artificial intelligence. According to the Commission's guidance, the obligation pursues four clear objectives.
Furthermore, the EU aims for alignment with international initiatives, such as the OECD's AI Incidents Monitor, to promote a globally coherent approach to AI safety.
Until now, the term "serious incident" remained abstract in the legal text. The new guidance does the crucial interpretative work here and gives the definition practical contours. A reportable incident occurs when a malfunction of an AI system directly or indirectly leads to one of the following four outcomes (an illustrative triage sketch follows the list):
Death or serious harm to health: The guidance defines "serious harm to a person's health" as a life-threatening illness, a temporary or permanent impairment of a body structure or function, a condition that necessitates or prolongs hospitalisation, or the need for medical intervention to prevent any of these outcomes.
Serious and irreversible disruption of critical infrastructure: Here, the guidance provides a two-tiered definition. A disruption is "serious" if it results in an imminent threat to life or the physical safety of a person. It is considered "irreversible" if, for instance, physical infrastructure needs to be rebuilt, essential data (such as patient records) cannot be restored, or specialised equipment that cannot be quickly replaced is destroyed.
Infringement of fundamental rights: This is one of the most innovative yet challenging parts of the reporting obligation. The guidance clarifies that not every breach is reportable: only infringements that significantly interfere with rights protected by the Charter, and do so on a large scale, must be reported. This high threshold is intended to prevent a flood of reports. The guidance also provides concrete examples of such large-scale infringements.
Serious harm to property or the environment: The "seriousness" of the harm is assessed based on factors such as the economic impact, the cultural significance of the damaged property, and the permanence of the damage. For environmental harm, the guidance refers to existing EU directives, which presume irreversible or long-lasting damage to ecosystems.
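For readers who want to translate these criteria into an internal triage workflow, the following minimal Python sketch encodes the headline thresholds described above. All class, field, and function names are assumptions made for illustration; they are not taken from the Commission's documents, and the actual assessment remains a case-by-case legal judgement.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Outcome(Enum):
    """The four outcome categories summarised above (Article 73 read with the draft guidance)."""
    DEATH_OR_SERIOUS_HEALTH_HARM = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()    # must be serious AND irreversible
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()       # must be significant and large-scale
    SERIOUS_PROPERTY_OR_ENVIRONMENTAL_HARM = auto()


@dataclass
class IncidentAssessment:
    """Internal triage record; field names are illustrative, not taken from the official template."""
    caused_by_ai_malfunction: bool                 # direct or indirect causal link
    outcome: Optional[Outcome]
    disruption_irreversible: bool = False          # only relevant for critical infrastructure
    rights_interference_large_scale: bool = False  # only relevant for fundamental rights


def is_potentially_reportable(a: IncidentAssessment) -> bool:
    """First-pass triage mirroring the headline criteria described above.
    It cannot replace the legal assessment the guidance expects."""
    if not a.caused_by_ai_malfunction or a.outcome is None:
        return False
    if a.outcome is Outcome.CRITICAL_INFRASTRUCTURE_DISRUPTION:
        return a.disruption_irreversible
    if a.outcome is Outcome.FUNDAMENTAL_RIGHTS_INFRINGEMENT:
        return a.rights_interference_large_scale
    return True
```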
The primary responsibility lies clearly with the providers of high-risk AI systems. As soon as they become aware of a potentially serious incident and establish a causal link to their AI system (or a reasonable likelihood of one), strict deadlines begin to run.
To meet these deadlines, the AI Act explicitly permits the submission of an incomplete initial report, to be followed by a complete report later.
After reporting, providers are obliged to immediately launch a thorough investigation, conduct a risk assessment, and take corrective action. Crucially, until the authorities have been informed, the AI system concerned must not be altered in any way that could affect the subsequent evaluation of the causes.
However, deployers also have a duty. If they identify a serious incident, they must inform the provider "immediately". The guidance pragmatically interprets this as within 24 hours.
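As a rough illustration of how these time limits might be tracked internally, the sketch below derives the deployer's 24-hour notification deadline mentioned above and treats the provider-side reporting window as a configurable parameter, since the concrete provider deadlines depend on the incident category and should be taken from the final guidance. The function names are assumptions for illustration.

```python
from datetime import datetime, timedelta

# The 24-hour window is the guidance's pragmatic reading of "immediately" for
# deployers, as described above. Provider-side windows differ by incident
# category and are therefore passed in rather than hardcoded here.
DEPLOYER_NOTIFY_PROVIDER_WITHIN = timedelta(hours=24)


def deployer_notification_deadline(identified_at: datetime) -> datetime:
    """Latest point by which a deployer should have informed the provider."""
    return identified_at + DEPLOYER_NOTIFY_PROVIDER_WITHIN


def provider_report_deadline(aware_and_causal_link_at: datetime,
                             reporting_window: timedelta) -> datetime:
    """Latest point for the provider's (possibly still incomplete) initial report
    to the market surveillance authority; the applicable window must be taken
    from Article 73 and the final guidance."""
    return aware_and_causal_link_at + reporting_window
```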
One of the biggest concerns for businesses was the risk of duplicate reporting obligations. The guidance provides much-needed clarity on this front. For high-risk AI systems in sectors that already have their own equivalent reporting obligations—such as financial services (DORA), critical infrastructure (NIS2, CER), or medical devices (MDR)—a simplified procedure applies: the reporting obligation under the AI Act is triggered only for infringements of fundamental rights. All other incidents (e.g., harm to health) are to be reported exclusively under the respective sectoral laws. This arrangement prevents redundant bureaucratic processes and ensures legal certainty.
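This carve-out can be expressed as a simple routing rule. The sketch below is an assumption-laden illustration of that logic, not an official decision tree: the regime names come from the paragraph above, while the function and its parameters are invented for the example.

```python
from typing import Optional

# Sectoral regimes with equivalent reporting obligations, as listed above.
SECTORAL_REGIMES = {"DORA", "NIS2", "CER", "MDR"}


def reporting_channel(sectoral_regime: Optional[str],
                      fundamental_rights_infringement: bool) -> str:
    """Pick the reporting channel for a serious incident involving a high-risk
    AI system, following the simplified procedure described above."""
    if sectoral_regime in SECTORAL_REGIMES and not fundamental_rights_infringement:
        # Health, infrastructure, property or environmental harm: sectoral law only.
        return f"report under {sectoral_regime} only"
    # No equivalent sectoral regime, or a large-scale fundamental rights
    # infringement: Article 73 of the AI Act applies.
    return "report under Article 73 of the AI Act"


# Example: a fundamental rights infringement by a bank's credit-scoring system
# would still be reported under the AI Act despite DORA.
print(reporting_channel("DORA", fundamental_rights_infringement=True))
```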
The standardised reporting template translates these requirements into a clear, structured format and is divided into five main sections. It is intended to ensure that market surveillance authorities across the EU receive consistent and comparable data.
With the publication of these drafts, the AI Act becomes tangible. Companies developing or deploying high-risk AI systems now have a solid basis on which to prepare their internal processes for incident management and reporting. The public consultation offers an important opportunity for all stakeholders to provide feedback and ensure that the final guidance is practical and effective. The deadline for submitting comments is 7 November 2025.