11 February 2026
Article Series – 1 / 16
For the German version of this FAQ, please see here.
When automotive manufacturers in the EU plan to use vehicle software and AI-based systems (e.g., for driver assistance or autonomous driving functions), they must comply with a variety of regulatory and other normative requirements. Corresponding obligations also extend to service providers in development projects, for whom legal requirements that formally address OEMs become at least indirectly applicable.
This FAQ deals with the legal framework and the most important norms and standards for regulation and approval — with a specific focus on software and AI in vehicles.
The market access requirement for vehicles and their components in the European single market is type approval, which is granted by the competent type-approval authorities (in Germany, the Federal Motor Transport Authority, Kraftfahrt-Bundesamt – KBA).
As part of the type approval process, manufacturers must demonstrate that the vehicle (or component) meets the technical requirements. In addition to the general requirements of Regulation (EU) 2018/858, the requirements of Regulation (EU) 2019/2144 (General Safety Regulation) apply. Commission Implementing Regulation (EU) 2022/1426 contains special provisions for autonomous vehicles (e.g., SAE Level 4 functions) and the corresponding automated driving systems (ADS). These regulations are supplemented by regulations at UNECE level, e.g., UN Regulation No. 157 ("Automated Lane Keeping Systems, ALKS"), which are incorporated by reference into the EU type-approval framework.
There are currently no regulations specifically governing the use of AI-based software in vehicles. The relevant aspects are covered by the above-mentioned regulations and further specified by standards and supplementary norms (see below).
With regard to AI-based vehicle functions, these regulations are supplemented by the EU Artificial Intelligence Act ("AI Act") as a cross-cutting framework: for "high-risk AI systems," especially when used in safety-critical vehicle functions, additional requirements apply, for example regarding data quality, transparency, traceability, and human oversight.
Under the regulatory framework of the AI Act, its provisions apply only indirectly to AI-based applications and systems that are themselves part of the type approval (see Art. 6 AI Act). The specific requirements for AI-based vehicle technologies will be further specified by sector-specific legislation; no such rules have been adopted yet. However, as the provisions of the AI Act must be taken into account when drafting these rules, they form a basic or minimum standard that will also be relevant for the use of AI in the automotive environment in the future.
Further aspects of the use of software in the vehicle environment are governed by supplementary UNECE regulations, namely UN Regulation No. 156 (software updates and software update management systems) and UN Regulation No. 155 (cybersecurity and cybersecurity management systems), which are incorporated by reference into the EU type-approval framework.
In addition, there are various regulations at member state level that govern the approval and operating license for automated and autonomous driving. For SAE Level 3 and Level 4, the German Road Traffic Act (StVG) contains specifications for the approval of motor vehicles with highly or fully automated driving functions (§§ 1a–1c StVG) and motor vehicles with autonomous driving functions in defined operating areas (§§ 1d–1j StVG in conjunction with the Autonomous Vehicle Approval and Operation Regulation, AFGBV). An overview of the current legal framework in Germany and the requirements that such vehicles must meet can be found here.
Under all of the aforementioned regulations, manufacturers – and indirectly their suppliers – must demonstrate that software, hardware, and AI systems have been developed in accordance with the "state of the art," meet safety and functionality requirements, and have been sufficiently verified and validated.
Norms and standards play a central role here. They define the "state of the art" and thus flow into the approval process, at least indirectly. Authorities and testing organizations use these standards as a reference when evaluating safety concepts.
Below is an overview of the most important standards relating to software, E/E systems, and AI in vehicles:
ISO 26262 "Road vehicles – Functional safety"
ISO 21448 "Road vehicles – Safety of the intended functionality (SOTIF)"
ISO/PAS 8800 "Road vehicles – Safety and artificial intelligence"
Other relevant standards/norms
Developers of safety-related software for automated driving functions must ensure, in accordance with the relevant automotive standards, that systems are developed in a traceable manner, remain safely controllable, and are technically secure. The requirements derive primarily from ISO 26262 (functional safety), ISO 21448 (SOTIF), ISO/SAE 21434 (cybersecurity), UNECE R155/R156, and process models such as Automotive SPICE (ASPICE), and can be summarized as follows:
✔️ Structured and traceable development
Requirements must be fully documented and traceable throughout the entire development process (requirement → architecture → code → test). System boundaries, interfaces, and the operational design domain (ODD) must be clearly defined.
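To make this concrete, here is a minimal sketch of how such trace links might be recorded in tooling; the `TraceLink` structure and its field names are illustrative assumptions, not something prescribed by the standards cited above.

```python
from dataclasses import dataclass, field

# Illustrative traceability record: each safety requirement is linked
# forward to the architecture element, code units, and tests covering it.
@dataclass
class TraceLink:
    requirement_id: str          # e.g. an ID from the requirements tool
    architecture_element: str    # component realising the requirement
    code_units: list = field(default_factory=list)
    test_cases: list = field(default_factory=list)

    def is_fully_traced(self) -> bool:
        # A requirement without linked code or tests is a traceability gap.
        return bool(self.code_units) and bool(self.test_cases)

links = [
    TraceLink("REQ-ODD-042", "LaneKeepAssist",
              code_units=["lka/controller.cpp"],
              test_cases=["TC-1101", "TC-1102"]),
    TraceLink("REQ-ODD-043", "LaneKeepAssist"),  # not yet covered
]

# Assessments typically start from exactly this question:
gaps = [l.requirement_id for l in links if not l.is_fully_traced()]
print("Untraced requirements:", gaps)  # -> ['REQ-ODD-043']
```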
✔️ Implementation of functional safety (ISO 26262)
Software must be designed to be fault-tolerant and able to respond safely. This includes plausibility checks, monitoring mechanisms, deterministic behavior, and support for safety analyses (e.g., FMEA/FTA). Safety mechanisms must be implemented in accordance with the applicable ASIL (Automotive Safety Integrity Level).
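As a simplified illustration of such mechanisms, the following sketch shows a plausibility check with a safe reaction. The signal names and the deviation threshold are invented for illustration and carry no ASIL claim.

```python
from typing import Optional

# Illustrative plausibility monitor: two redundant speed sources are
# cross-checked; on implausible deviation the function degrades safely.
MAX_DEVIATION_KMH = 5.0  # invented threshold, for illustration only

def plausible_speed(wheel_speed_kmh: float, gps_speed_kmh: float) -> Optional[float]:
    """Return a validated speed, or None if the inputs are implausible."""
    if abs(wheel_speed_kmh - gps_speed_kmh) > MAX_DEVIATION_KMH:
        return None  # signal disagreement: do not trust either source
    return wheel_speed_kmh

def control_step(wheel_speed_kmh: float, gps_speed_kmh: float) -> str:
    speed = plausible_speed(wheel_speed_kmh, gps_speed_kmh)
    if speed is None:
        # Safe reaction: deactivate the function and warn the driver
        # rather than acting on potentially corrupted data.
        return "FUNCTION_OFF_DRIVER_WARNED"
    return f"ACTIVE at {speed:.1f} km/h"

print(control_step(98.0, 97.2))   # plausible -> ACTIVE
print(control_step(98.0, 55.0))   # implausible -> safe state
```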
✔️ Managing perception and AI limitations (SOTIF)
Even without technical defects, the system must not generate unacceptable risks. Developers must identify uncertainties, take sensor limitations into account, and implement safe degradation strategies (minimal risk condition). Edge cases must be addressed systematically.
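A minimal sketch of such a degradation strategy follows, assuming an invented three-stage scheme (nominal, degraded, minimal risk condition) and illustrative confidence thresholds; a real system would derive both from its safety concept and validated performance limits.

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    DEGRADED = auto()          # e.g. reduced speed, increased headway
    MINIMAL_RISK = auto()      # e.g. controlled stop in a safe location

def next_mode(sensor_confidence: float, inside_odd: bool) -> Mode:
    """Pick an operating mode from perception confidence and ODD status.

    Thresholds are illustrative; SOTIF analyses would derive them from
    the system's validated performance limits.
    """
    if not inside_odd or sensor_confidence < 0.3:
        return Mode.MINIMAL_RISK   # leave traffic safely, then stop
    if sensor_confidence < 0.7:
        return Mode.DEGRADED       # keep operating with reduced performance
    return Mode.NOMINAL

print(next_mode(0.9, inside_odd=True))    # Mode.NOMINAL
print(next_mode(0.5, inside_odd=True))    # Mode.DEGRADED
print(next_mode(0.9, inside_odd=False))   # Mode.MINIMAL_RISK
```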
✔️ Specific requirements for AI/ML
Training data, model versions, and training processes must be documented and reproducible. Systems require robustness tests, coverage analyses, and mechanisms for detecting unusual or unknown situations (OOD detection).
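One common, deliberately simplified approach to OOD detection is to flag inputs that lie far from the training data distribution. The sketch below uses a plain Mahalanobis-style distance over invented NumPy data; production detectors are considerably richer, but the logic to be documented follows the same pattern.

```python
import numpy as np

# Simplified OOD check: flag inputs far from the training feature
# distribution. The training data here is a synthetic stand-in.
rng = np.random.default_rng(0)
train_features = rng.normal(0.0, 1.0, size=(1000, 4))

mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def is_out_of_distribution(x: np.ndarray, threshold: float = 5.0) -> bool:
    """Is the Mahalanobis distance to the training distribution above threshold?"""
    d = x - mean
    dist = float(np.sqrt(d @ cov_inv @ d))
    return dist > threshold

print(is_out_of_distribution(np.zeros(4)))       # False: typical input
print(is_out_of_distribution(np.full(4, 8.0)))   # True: far outside training data
```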
✔️ Cybersecurity by Design (ISO/SAE 21434, R155)
Software must be developed according to secure coding principles. Threat analyses, interface security, secure communication, and protected update mechanisms are mandatory.
✔️ Updateability (R156)
Systems must be updatable, versioned, and capable of being rolled back. Changes must not compromise safety or security and require re-verification.
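The following sketch, using only Python's standard library, combines the two preceding points: accepting only update packages with a valid signature, and keeping the previous version available for rollback. The HMAC shared secret shown here is a simplification; a production system would use asymmetric signatures and hardware-backed key storage.

```python
import hashlib
import hmac

# Illustrative update handler: verify authenticity before installing,
# and keep the previous version so the change can be rolled back.
SHARED_KEY = b"demo-key-not-for-production"

def sign(package: bytes) -> str:
    return hmac.new(SHARED_KEY, package, hashlib.sha256).hexdigest()

class UpdateManager:
    def __init__(self, installed: bytes, version: str):
        self.installed, self.version = installed, version
        self.previous = None  # (package, version) of the prior release

    def install(self, package: bytes, version: str, signature: str) -> bool:
        if not hmac.compare_digest(sign(package), signature):
            return False                                    # reject tampered update
        self.previous = (self.installed, self.version)      # enable rollback
        self.installed, self.version = package, version
        return True

    def rollback(self) -> bool:
        if self.previous is None:
            return False
        self.installed, self.version = self.previous
        self.previous = None
        return True

mgr = UpdateManager(b"v1-firmware", "1.0")
pkg = b"v2-firmware"
assert mgr.install(pkg, "2.0", sign(pkg))          # valid signature: installed
assert not mgr.install(b"evil", "6.6", "bad-sig")  # tampered: rejected
assert mgr.rollback() and mgr.version == "1.0"     # safe return to prior version
```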
✔️ Comprehensive verification and validation
In addition to classic tests, simulation- and scenario-based validations are required, especially for AI functions. Every change requires regression testing.
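As an illustration of scenario-based regression testing, here is a pytest-style sketch in which each catalogued scenario becomes a parametrized test that is re-run on every change. The `brake_distance_m` function and the scenario parameters are invented placeholders for a real simulation harness.

```python
import pytest  # assumed to be installed

# Invented stand-in for a simulation run; in practice this would execute
# a scenario in a simulation environment and return the outcome.
def brake_distance_m(speed_kmh: float, friction: float) -> float:
    v = speed_kmh / 3.6                  # km/h -> m/s
    return v * v / (2 * 9.81 * friction)  # idealized braking distance

# Each scenario from the catalogue becomes a regression test.
@pytest.mark.parametrize("speed_kmh, friction, max_allowed_m", [
    (50, 0.8, 13.0),   # dry road, urban speed
    (50, 0.3, 33.0),   # wet road
    (100, 0.8, 50.0),  # dry road, rural speed
])
def test_braking_scenario(speed_kmh, friction, max_allowed_m):
    assert brake_distance_m(speed_kmh, friction) <= max_allowed_m
```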
✔️ Evidence and documentation
All work must enable the creation of a safety and security case. Changes, tools, and software versions must be documented in a controlled manner.
In short, developers must ensure that their software is predictable, fail-safe, protected against attacks, testable, and updatable, and that AI-related uncertainties are kept under control; all of this must be demonstrable at any time. Suppliers must take these requirements into account in the development process so that the client (e.g., an OEM) can fulfill its obligations as the party placing the relevant technologies on the market.
Yes, possibly. Whether an existing approval is affected depends on whether the change affects approval-relevant characteristics of the approved vehicle type.
Legal framework
As a rule, the approval must be reassessed if changes affect approval-relevant characteristics of the vehicle type.
Such changes may trigger a revision or extension of the type approval.
Retrofitting legacy vehicles (e.g., with new assistance or connectivity functions) can create new interfaces, data flows, and cyber risks, and can thereby alter the original basis of the approval. Such measures must therefore be treated as approval-relevant changes and assessed under the applicable regulations.
From the perspective of the legal/compliance department of an automobile manufacturer or developer of AI-based vehicle technologies, the following steps are recommended:
✔️ Early involvement: Ensure that software/AI functions comply with the relevant standards and approval requirements as early as the concept and architecture phase, if necessary by providing "cookbooks" and other aids for development teams.
✔️ Standards mapping: A systematic overview of which standard(s) apply to which function (E/E system, ADAS, AI) – e.g., ISO 26262 for hardware/software malfunctions, ISO 21448 for system boundaries, ISO/PAS 8800 for AI.
✔️ Clarify requirements with clients: Agree early on with clients in development projects which standards must be taken into account, so that the client can later demonstrate compliance with them.
✔️ Documentation & verification: Document development evidence, verification and validation reports, scenario tests, data quality, and changes (e.g., via OTA updates).
✔️ Monitoring & lifecycle management: Not only check software updates and AI models before market launch, but also monitor them after they have been placed on the market, analyze malfunctions, control recalls or updates if necessary, and implement appropriate processes and monitoring.
✔️ Interfaces to approval & product liability: Coordination with type approval/conformity testing and product liability law (e.g., recall obligations, freedom to make changes, liability risks).
✔️ Data protection & data security: Implement "by design" in the development process.
✔️ Training & awareness: Development, control, and management teams must be familiar with the specifics of vehicle software/AI (edge cases, sensor technology, environmental conditions) and their legal/regulatory implications.
By Thomas Kahl and Nils von Reith
By Thomas Kahl and Teresa Kirschner, LL.M. (Information and Media Law)