Debbie Heywood

Senior Counsel – Knowledge


18 January 2024

Radar – January 2024 – 1 of 2 Insights

EU caps off 2023 with progress on flagship regulation

  • In-depth analysis

Progress on the AI Act, the Data Act, the revised Product Liability Directive, and the Cybersecurity Regulation.

What's the issue?

With European Parliament elections coming up in June 2024, the current European Commission is under pressure to progress its legislative programme before the end of its term. While the elections do not necessitate a change in Commissioners, there is usually considerable turnover following EP elections. A significant change to the constituents of the Parliament could impact legislation in progress as well as the makeup of the Commission itself.

What's the development? 

Following political agreement on the Cyber Resilience Act and the AI Act on 1 and 8 December respectively, the EU went on to agree the revised Product Liability Directive and the Cybersecurity Regulation, and to publish the Data Act in the Official Journal, before the end of 2023.

AI Act

Provisional political agreement was reached on the AI Act (AIA) on 8 December 2023 after lengthy and intense negotiations between the European Parliament, the Council of the EU and the European Commission.

The legislation continues to take a tiered approach, with a ban on certain types of 'unacceptable risk' AI and the most onerous provisions applying to high-risk systems. The compromise text has not yet been completed and, without seeing the final drafting, we do not have full details. The current expectation is that the final text will be published by the end of Q1 2024. In the meantime, the press releases (EP, Council) and the press conference highlight the following.

Definition of AI

This follows the OECD definition to ensure international consistency.

Scope and exemptions

The AIA will not apply to areas outside the scope of EU law, to systems used exclusively for military or defence purposes, to AI systems used solely for research and innovation, or to people using AI for non-professional purposes. Protections have been included to prevent misuse for national security purposes. Only completed systems and models will be covered; systems under development will not be in scope, and it is unclear whether existing systems will be impacted. Open source and free models are excluded from the majority of the Act but may still be subject to the bans (where relevant), and are subject to transparency obligations around training data, as well as a requirement to comply with copyright law.

Banned AI

The following will be banned:

  • Biometric categorisation systems to infer sensitive data or which use sensitive data.
  • Untargeted scraping of the internet and CCTV systems to create facial recognition databases.
  • Emotional recognition in the workplace and educational institutions.
  • Social scoring based on social behaviour or personal characteristics.
  • AI systems which manipulate human behaviour to circumvent free will or which exploit people's vulnerabilities.
  • Predictive policing for individuals (in some cases).

Law enforcement exemptions

Safeguards have been included to protect fundamental rights. Remote biometric identification (RBI) systems may be used in public spaces for law enforcement purposes, subject to prior judicial authorisation and only for a strictly limited list of defined use cases. Post-remote RBI will be permitted for the targeted search of a person convicted or suspected of having committed a serious crime. Real-time RBI use will be subject to strict limitations, including on time and location, and will be permitted only for the targeted search of victims of certain crimes, the prevention of a specific and present terrorist threat, or to locate or identify a person suspected of having committed stipulated serious crimes (including terrorism, trafficking, murder, and environmental crime). An emergency procedure has been included to allow law enforcement agencies to deploy, in cases of urgency, a high-risk AI tool which has not passed a conformity assessment.

High-risk systems

Systems which pose a risk of significant harm to health, safety, fundamental rights, the environment, democracy (including systems used to influence the outcome of elections), or the rule of law will be classified as high-risk. A mandatory fundamental rights assessment will be required before high-risk systems are put on the market. Deployers also have increased transparency requirements, and public bodies which use high-risk systems will be required to register their use in an EU database. Individuals will have the right to make complaints about AI systems and to receive explanations about decisions made by high-risk systems which impact their rights. They must also be informed when they are exposed to an emotion recognition system.

General-purpose AI systems (GPAI) and foundation models

New provisions have been added to cover GPAI (AI systems which can be used for many different purposes) and for the foundation models they are based on. Transparency requirements will apply. These include drawing up technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used to train the models. High-impact GPAI models (foundation models trained with large amounts of data and with advanced complexity, capabilities and performance which can disseminate systemic risks) will be subject to additional requirements including risk evaluation and mitigation obligations. They will also be subject to adversarial testing requirements, cyber security rules, incident reporting requirements, and will be required to report on their energy efficiency. Until such time as harmonised standards are published, codes of practice can be relied upon.

Limited-risk systems

These will be subject to light transparency obligations, for example, a requirement to disclose that content is AI-generated.

Governance and oversight

An AI Office will be created within the Commission to oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce EU-wide rules. The Office will be advised by a scientific panel of independent experts, in particular about GPAI systems, foundation models, and material safety risks. The AI Board, comprised of Member State representatives, will provide a coordination function and act as an advisory body to the Commission. Member States will be given a role in managing implementation, including the design of codes of practice for foundation models. An advisory forum of stakeholders including industry representatives, SMEs, civil society and academia, will be set up to provide technical expertise to the Board.


Penalties

Non-compliance can result in fines of up to the higher of:

  • €35 million or 7% of annual global turnover for violations relating to banned AI applications
  • €15 million or 3% for violations of AI Act obligations (more proportionate caps are set for SMEs and start-ups)
  • €7.5 million or 1% for supplying incorrect information.

Complaints may be made by individuals and entities to the relevant market surveillance authority concerning non-compliance.

Measures to support innovation and SMEs

Clarifications have been added around the allocation of responsibilities in the supply chain, in particular between providers and users of AI systems, and on the interaction with other existing laws. Measures to support innovation and SMEs, including around AI regulatory sandboxes and real-world testing, have also been included.

Next steps and timeline

The AI Office will be set up immediately. Work will now continue to finalise details and a compromise text will be submitted to Member State representatives (Coreper) for endorsement. The text will need to be confirmed by the European Parliament and Council and will then be subject to legal-linguistic revision before progressing to the final stages of adoption. The legislation is expected to come into force in Spring 2024. There will be a two-year implementation period; however, the prohibitions are expected to apply within six months, and the transparency and governance rules after twelve months.

Pending agreement of the final text, the European Commission has published a set of Q&As. These outline some of the highlights of the AI Act, including its application, the various risk categorisations and associated duties, the way the AI Act will be administered and enforced, and fundamental rights. The EC also announced an AI Pact – a voluntary scheme to foster early implementation of measures in the AI Act.

See here for more on the AI Act and AI generally.

Data Act

The EU's Data Act was published in the Official Journal on 22 December 2023 and came into force on 11 January 2024. It will apply broadly from 12 September 2025. The Data Act is intended to remove barriers to data sharing, give businesses access to data they contribute to creating, and give individuals more control over all their data (not just personal data). It will empower users of connected devices to access the data they generate and share it with third parties, as well as to switch cloud and edge service providers.

It also aims to protect SMEs by providing a harmonised framework in which data can be shared, equalising access to data across the single market. Measures prevent abuse of contractual imbalances in data sharing contracts and guidance is given on reasonable compensation of businesses for making data available.

In exceptional circumstances (for example in cases of public emergency), public sector bodies, the Commission, the European Central Bank and EU bodies can access private sector data.

Provisions are also included to protect trade secrets and intellectual property rights.

While the Data Act will mostly apply from 12 September 2025, some elements will be introduced on other dates:

  • Article 3(1) (requirement to design and manufacture connected products to make data and metadata easily accessible and machine-readable) will apply to connected products and related services which are placed on the market after 12 September 2026.
  • Chapter III (obligations for data holders obliged to make data available pursuant to Union Law) will apply to obligations to make data available under Union law or national legislation adopted in accordance with Union law, which enters into force after 12 September 2025.
  • Chapter IV (unfair contractual terms related to data access and use between enterprises) will apply to contracts concluded after 12 September 2025.
  • Chapter IV will apply from 12 September 2027 to contracts concluded on or before 12 September 2025, provided they are of indefinite duration or due to expire at least 10 years from 11 January 2024.

Read more about the Data Act here.

Cyber Resilience Act

The European Parliament and Council reached agreement on the Cyber Resilience Act on 1 December 2023. This will introduce mandatory cyber security requirements for all hardware and software throughout the product lifecycle, taking a risk-based approach. Manufacturers will be required to implement security by design and provide support and updates to consumers for a period of time related to the anticipated lifespan of the product. They will also be subject to transparency and incident reporting requirements. The CRA will now be formally adopted and will enter into force 20 days after publication in the Official Journal. Manufacturers, importers and distributors of hardware and software products will then have 36 months to prepare for full implementation and 21 months in relation to incident and vulnerability reporting obligations.

Revised Product Liability Directive (PLD)

Provisional political agreement was reached on the EU's revised Product Liability Directive on 14 December 2023. The new Directive will replace the current PLD and will introduce changes designed to cover the online shopping environment and new technologies. Key features include:

  • The revised PLD will extend the definition of "product" to cover digital manufacturing files and software (but will not include free and open source software developed or supplied outside the course of a commercial activity).
  • Online platforms will be potentially liable for a defective product if they present, or otherwise enable the transaction for, its sale in a way which would lead an average consumer to believe the product is provided by the online platform or by a trader acting under the platform's authority or control.
  • Where a product is substantially modified outside the manufacturer's control and put back on the market, the maker of the modification will be held liable as manufacturer of the modified product.
  • Damages can be claimed for death or personal injury (including damage to psychological health), damage to property, and destruction or irreversible corruption of data. Compensation can cover both material and non-material losses, subject to national law provisions.
  • Where a defective product or component is imported from outside the EU, the importer, the authorised representative of the manufacturer or, as a last resort, the fulfilment service provider, can be held liable.
  • Where it is excessively difficult for a consumer to prove the defectiveness of a product or the causal link between a defect and damage (particularly due to technical or scientific complexity), a court may decide that the claimant only has to prove the likelihood that the product was defective or of a causal link to damage.
  • In exceptional cases, there can be an extended liability period of up to 25 years.

The agreed text must now be endorsed by Member State representatives within the Council (Coreper) and then go through formal adoption by the Council and European Parliament.

EU Cybersecurity Regulation set to be published in the Official Journal

The Cybersecurity Regulation, which sets out common cyber security standards for the institutions, bodies, offices and agencies of the EU, has passed its final hurdle following its adoption by the Council of the EU and will now be published in the Official Journal. The Regulation puts in place a governance and risk management framework, extends the remit of the Computer Emergency Response Team for the EU institutions, and creates a minimum set of information security rules and standards.

What does this mean for you?

As ever, there is a lot going on in the tech and data space. Here we've just looked at some key EU legislation agreed or passed at the end of the year but of course, there is more to think about. 2024 will be busy for those caught by incoming legislation both in the EU and the UK.

In this series

Technology, media & communications

EU caps off 2023 with progress on flagship regulation

18 January 2024

by Debbie Heywood

Technology, media & communications

Getting ready for the UK's OSA and the EU's DSA

18 January 2024

by Debbie Heywood
