24 May 2023
Radar - May 2023 – 3 of 3 Insights
The EU's approach to regulating AI is through top-down umbrella legislation. The European Commission proposed an AI Act in April 2021, as discussed here. The AI Act is intended to regulate the development and use of AI by imposing a framework of requirements and obligations on developers, deployers and users, together with regulatory oversight. The framework will be underpinned by a risk categorisation of AI systems, with 'high-risk' systems subject to the most stringent obligations and a ban on 'unacceptable-risk' systems.
Much of the subsequent debate around the draft AI Act has focused on the risk-categorisation system and definitions.
The European Parliament has provisionally agreed its negotiating position (likely to be formally adopted on 14 June 2023), following the Council's adoption of its position in December 2022. This means trilogues to arrive at the final version of the Act are likely to begin in early summer.
The Council's position
The Council of the European Union's proposed changes include:
The European Parliament's position
MEPs have suggested a number of potentially significant amendments to the Commission's proposal.
An amended list of banned 'unacceptable-risk' AI to include intrusive and discriminatory uses of AI systems such as:
Suggested changes would expand the scope of the high-risk areas to include harm to people's health and safety, fundamental rights, or the environment. High-risk systems would include AI systems used to influence voters in political campaigns and in social media recommender systems (on platforms with more than 45 million users under the DSA). The high-risk obligations are more prescriptive, with a new requirement to carry out a fundamental rights assessment before use. However, the European Parliament's proposal also provides that an AI system which ostensibly falls within the high-risk category but which does not pose a significant risk can be notified to the relevant authority as low-risk. The authority would have three months to object, during which time the AI system can be launched. Misclassifications would be subject to fines.
Enhanced measures for foundation and generative AI models
Providers of foundation models would be required to guarantee protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would be subject to risk assessment and mitigation requirements and data governance provisions, and would be obliged to comply with design, information and environmental requirements, as well as to register in the EU database.
Generative AI model providers would be subject to additional transparency requirements, including disclosing that content is AI-generated. Models would have to be designed to prevent them from generating illegal content, and providers would need to publish summaries of copyrighted data used for training. They would also be subject to assessment by independent third parties.
MEPs propose additional rights for citizens to file complaints about AI systems and receive explanations of decisions reached by high-risk AI systems that significantly impact them.
See here for more on the European Parliament's position.
Anyone developing, deploying or using AI in the EU, placing AI systems on the EU market or putting them into service there, or whose systems produce output used in the EU, will be impacted by the AI Act and will be waiting for the outcome of the trilogues. The European Commission is hoping that the AI Act will be in force by the end of 2023, following which there will be a two-year implementation period.
by Louise Popple and Debbie Heywood