24 May 2021
Radar - May 2021
Few emerging technologies are as revolutionary as AI, which presents tremendous opportunities but also poses potentially dystopian risks. Regulating its development and use to ensure ethical standards are maintained, without stifling innovation, is a huge challenge.
The EC has published:
The draft AI Regulation sets out proposals for:
The Regulation takes a risk-based approach to AI systems. Some types of AI, as set out in Title II, are considered to carry unacceptable risk and are prohibited. High-risk AI systems and their providers, importers, distributors and, in some cases, users are subject to a range of obligations. Limited-risk systems are subject to transparency requirements, while minimal-risk systems remain subject to existing legislation, although sectors may develop or adhere to codes of conduct to help foster trust.
The draft has been opened for public consultation and now goes to the Council and Parliament for discussion, so it has a long way to go before it is finalised and may well be subject to considerable changes. Once adopted, Member States will have two years to apply the majority of the Regulation, although some provisions will apply sooner.
Emerging points of discussion include:
Stakeholders who have had to get to grips with the GDPR will find many of the concepts in the Regulation familiar. From the risk-based approach to the requirements around transparency and information provision, record-keeping, territorial scope and enforcement, cybersecurity and data governance, there are recognisable requirements.
The Regulation will not apply in the UK but its wide territorial scope means it will impact UK businesses placing AI products on the EU market, using them in the EU, and providing output from AI which is used in the EU.
It remains to be seen both how the UK develops its own framework and whether the EC Regulation changes as it moves through the legislative process. As with the GDPR, the Regulation is likely to be supplemented in time by guidance from the planned EU-level regulatory board.
What does the Regulation cover?
The AI Regulation is intended to apply to:
What is an AI system?
The Regulation defines an AI system as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with".
What AI uses are considered to pose an unacceptable risk?
AI uses which are considered to violate fundamental rights are set out in Title II and will be prohibited. They include the placing on the market, putting into service, or use of AI systems which:
What is a high-risk AI system?
AI systems which have an adverse impact on people's safety or their fundamental rights are considered high-risk. This includes where:
In addition, Annex III contains a list (which can be amended under certain circumstances) of high-risk AI systems. The list includes 'real-time' and 'post' remote biometric identification systems (including facial recognition technology), credit scoring systems, AI systems known to contain bias, AI used for recruitment, and systems used to assess eligibility for welfare or legal aid (some of these are subject to limited exemptions).
What are the proposed rules on high-risk AI systems?
High-risk AI systems will be subject to a number of requirements including:
Various stakeholders, including providers, importers, distributors and users of high-risk systems, are subject to individual requirements including:
Notifying authorities and notified bodies
Member States are required to designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. The Regulation sets out the process for conformity assessment and for evidencing it through CE marking, declarations of conformity and certification.
Regulators and enforcement
The EU will set up an EU database for stand-alone high-risk AI systems and establish a European Artificial Intelligence Board comprising Member State representatives from the relevant national supervisory authority (to be designated or created by each Member State). National regulators will have the power to impose sanctions for non-compliance, including fines of up to between 2% and 6% of annual global turnover, depending on the infringement.
Innovation
Title V of the Regulation sets out measures to support innovative development of AI in the EU, including establishing sandboxes and providing support to small-scale providers and users.
The legislation is in its infancy and may well change as it moves to enactment. With such fast-moving technology, a risk-based approach seems sensible but, as always, we will have to wait for the finalised Regulation to understand the full requirements.
For more on AI and other disruptive tech, see our latest edition of Download.
by Megan Howarth and Debbie Heywood