What's the issue?
The EU's approach to regulating AI is through top-down umbrella legislation. The European Commission proposed an AI Act in April 2021, as discussed here. The AI Act is intended to regulate the development and use of AI by imposing a framework of requirements and obligations on developers, deployers and users, together with regulatory oversight. The framework will be underpinned by a risk-categorisation for AI, with 'high-risk' systems subject to the most stringent obligations and 'unacceptable-risk' systems banned outright.
Much of the subsequent debate around the draft AI Act has focused on the risk-categorisation system and definitions.
What's the development?
The European Parliament has provisionally agreed its negotiating position (likely to be formally adopted on 14 June 2023), following the Council's adoption of its position in December 2022. This means trilogues to arrive at the final version of the Act are likely to begin in early summer.
The Council's position
The Council of the European Union's proposed changes include:
- a narrower definition of AI systems, covering systems developed through machine learning approaches and logic- and knowledge-based approaches
- prohibition of private sector use of AI for social scoring, as well as of AI systems which exploit the vulnerabilities not only of specific groups of persons, but also of persons who are vulnerable due to their social or economic situation
- clarification of when real-time biometric identification systems can be used by law enforcement
- clarification of the requirements for high-risk AI systems and the allocation of responsibility in the supply chain
- new provisions relating to general purpose AI and situations where it is integrated into another high-risk system
- clarification of exclusions applying to national security, defence and the military as well as where AI systems are used for the sole purpose of research and development or for non-professional purposes
- simplification of the compliance framework
- more proportionate penalties for non-compliance for start-ups and SMEs
- increased emphasis on transparency, including a requirement to inform people exposed to emotion recognition systems
- measures to support innovation.
The European Parliament's position
MEPs have suggested a number of potentially significant amendments to the Commission's proposal.
Unacceptable-risk AI
The list of banned 'unacceptable-risk' AI would be amended to include intrusive and discriminatory uses of AI systems such as:
- real-time remote biometric identification systems in publicly accessible spaces
- 'post' (after the event) remote biometric identification systems, with the only exception being use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation)
- predictive policing systems (based on profiling, location or past criminal behaviour)
- emotion recognition systems in law enforcement, border management, the workplace and educational institutions
- indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (in violation of human rights and the right to privacy).
High-risk AI
Suggested changes would expand the scope of the high-risk category to include harm to people's health and safety, fundamental rights, or the environment. High-risk systems would include AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms designated under the DSA as very large online platforms (those with more than 45 million users). The high-risk obligations would be more prescriptive, with a new requirement to carry out a fundamental rights assessment before use. However, the European Parliament's proposal also provides that an AI system which ostensibly falls within the high-risk category but which does not pose a significant risk can be notified to the relevant authority as being low-risk. The authority would have three months to object, during which time the AI system could be launched. Misclassifications would be subject to fines.
Enhanced measures for foundation and generative AI models
Providers of foundation models would be required to guarantee protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would be subject to risk assessment and mitigation requirements and data governance provisions, and to obligations to comply with design, information and environmental requirements, as well as to register in the EU database.
Generative AI model providers would be subject to additional transparency requirements, including a requirement to disclose that content has been generated by AI. Models would have to be designed to prevent them from generating illegal content, and providers would need to publish summaries of the copyrighted data used for training. They would also be subject to assessment by independent third parties.
Additional rights
MEPs propose additional rights for citizens to file complaints about AI systems and receive explanations of decisions reached by high-risk AI systems that significantly impact them.
See here for more on the European Parliament's position.
What does this mean for you?
Anyone developing, deploying or using AI in the EU, placing AI systems on the EU market or putting them into service there, or whose systems produce output used in the EU, will be affected by the AI Act and will be awaiting the outcome of the trilogues. The European Commission hopes the AI Act will be in force by the end of 2023, after which there will be a two-year implementation period.
Find out more
- You can use our Digital Legislation Tracker to keep on top of incoming digital legislation, including the AI Act. There is also a page dedicated to the AI Act here.
- For a deep-dive into the AI Act as originally proposed, see our Interface edition here.
- For more on AI and regulatory approaches around the world, see here.