AI is not only driving advances in technologies that make automated and assisted driving more comfortable, safe, and sustainable; it also introduces a range of risks that could affect the rights and freedoms of individuals, including significant safety concerns when deployed on the road.
The EU AI Act aims to address these risks. While the Act establishes sector-neutral regulations for AI use, future developments will introduce industry-specific guidelines tailored for the automotive sector.
The following article provides an initial overview of this emerging regulatory framework, outlining how developers and users of automotive AI may need to adapt. In summary, the AI Act and forthcoming automotive-specific regulations will establish new compliance requirements, marking a transformative shift in AI oversight across the industry.
Current legal framework
Autonomous and automated vehicles (AVs) in the EU are subject to a range of regulatory frameworks. Some, such as the Type-Approval Framework Regulation (TAFR) - encompassing Regulation (EU) 2018/858 and Regulation (EU) 2019/2144 - focus specifically on automotive standards, including vehicle approval and market surveillance. These are further supported by international standards, such as those from the United Nations Economic Commission for Europe (UNECE), which are integrated into EU law.
Under TAFR, vehicles must complete a type-approval process to ensure compliance before they are marketed. Additional regulations, like the General Product Safety Regulation (GPSR), address broader product safety concerns.
Together, these regulations ensure that vehicles meet safety, environmental, and technical standards across the EU before market entry. While they cover traditional safety issues, including some aspects of AI use, they do not yet explicitly address AI-specific risks.
AI use in the industry is, of course, also subject to rules which are not specific to AVs but may nevertheless be engaged. For example, when AI processes personal data from connected vehicles, the GDPR applies, and copyright law, which governs AI's use of protected content without addressing AI directly, may also be relevant.
In short, multiple existing legal frameworks already regulate AI use in automotive products and services. However, there is currently no regulation exclusively focused on AI. This is about to change.
The AI Act’s risk-based approach
The EU AI Act came into force on 1 August 2024 and will be phased in over a three-year period. It introduces a risk-based approach to regulating AI.
AI systems posing unacceptable risks - such as government social scoring or certain types of real-time surveillance - are prohibited.
Chapter III addresses high-risk AI systems (HRAI), which could have serious adverse effects if they fail. These systems must meet stringent requirements for data security, transparency, human oversight, and robustness, and undergo a mandatory conformity assessment before market entry, with continuous monitoring throughout their lifecycle. Applications in critical infrastructures, such as energy supply, education systems, employment and justice serve as prime examples.
Certain types of AI require transparency measures even though they are not designated as high-risk: for example, users must be notified when interacting with an AI system, such as a chatbot or other interactive application.
AI systems posing very low risks to individuals or society are governed by general guidelines and best practices, without additional legal requirements.
AI in the automotive industry
AI is increasingly integral to products and services in the automotive industry. In autonomous and driver-assistance systems, AI enhances vehicle intelligence, safety, and efficiency by analysing data from diverse driving scenarios to make informed driving decisions, learn from road conditions, and monitor driver fitness and alertness. Additionally, AI supports non-safety-related functions, such as media, access, and payment services within vehicles.
AI’s role in controlling vehicle functionality may significantly impact individual rights and freedoms. System failures could lead to accidents, property damage, or personal injury.
Special rules for automotive AI
At first glance, most AI-driven autonomous and assistance systems are expected to be classified as HRAI, given the critical nature of in-vehicle, on-the-road decision-making. Other AI applications, such as entertainment or convenience features, are more likely to be classified as low-risk AI (LRAI).
To align with industry-specific requirements and harmonised EU safety standards, the AI Act classifies certain AI systems as HRAI if they are governed by other specific harmonisation legislation (Article 2(2)), such as TAFR and GPSR, which mandate prior conformity assessments and approval. In these cases, the AI Act won't apply directly.
This approach ensures that automotive-specific AI use will remain primarily regulated by sectoral legislation, while the HRAI provisions in the AI Act serve as supplementary, albeit overarching, standards. The AI Act mandates the EU legislator to adapt Union harmonisation laws, such as the TAFR and GPSR, to align with HRAI standards (Article 80 ff.). These rules will integrate HRAI requirements tailored to automotive needs, potentially including "regulatory sandboxes" for innovation.
AI systems not subject to such harmonised EU legislation must continue to be evaluated under the AI Act's own rules and undergo standard risk assessments.
Future compliance requirements for AI use cases in the industry
Although the specifics of the revised AI-related TAFR and GPSR regulations are still pending, with no drafts yet available, the HRAI and LRAI provisions in the AI Act provide an early indication of the regulatory demands the industry will face.
Under the AI Act’s high-risk classification, businesses must adopt extensive documentation, monitoring, and risk mitigation procedures in AI development. These include rigorous testing to identify and eliminate potential biases, transparency logs for decision traceability, and comprehensive quality assurance throughout the vehicle’s lifecycle. Additionally, dedicated teams, including external auditors, will be needed to conduct risk assessments and ensure ongoing compliance, making the Act’s requirements technically complex, costly, and resource-intensive.
While forthcoming sector-specific updates may clarify compliance obligations for automotive stakeholders, it is uncertain to what extent these regulations will fully align with the EU AI Act, potentially affecting the entire supply chain.
Automobile manufacturers cannot immediately implement the AI Act requirements as they stand but may face new compliance obligations tied to revised TAFR standards as and when they become available. Balancing the specific demands of revised TAFR for regulated products with the AI Act’s general AI requirements will add complexity to AI regulation adherence.
Suppliers face a similar challenge. Like manufacturers, they are pressed for time and, while waiting for revised TAFR details, must prepare AI offerings to meet anticipated sector-specific standards under new regulations in the near future.
Impact on non-EU companies
The AI Act broadly applies to AI systems introduced or used in the EU or EEA, including general-purpose models, regardless of the provider’s location. This reach extends to non-EU/EEA operators whose AI outputs, such as predictions or recommendations, are used within these regions. As a result, non-EU companies - such as those based in the U.S., China, South Korea, or Japan - must comply with the AI Act if they market AI systems or use AI-generated outputs in the EU.
This extraterritorial scope requires any automotive manufacturer or service provider entering the EU market to address overlapping regulatory requirements. For example, a US-based company may need to comply with the EU’s stringent high-risk AI requirements while also adhering to differing or less stringent regulations in its home country. The same applies to, for example, Chinese manufacturers entering the EU market with products primarily designed to meet their national standards.
Many businesses may choose to adopt the EU’s high standards globally, potentially leading to a gradual harmonisation of AI safety standards worldwide if other major markets align with EU principles. While this fosters regulatory consistency, it also increases costs and development timelines.
Outlook
In essence, the EU AI Act redefines the automotive industry’s approach to AI, emphasising ethics, transparency, and safety in automated decision-making while also influencing global standards through its extensive reach. The practical impact on non-EU companies and international regulatory goals remains to be seen.
Though the impending revisions to the EU’s legal framework are not yet fully clear, businesses should familiarise themselves with the AI Act’s requirements. With an ambitious implementation timeline, early preparation is essential to navigate and comply with these new regulatory demands.