Author

Debbie Heywood

Senior Counsel – Knowledge


24 May 2021

Radar - May 2021 – 2 of 3 Insights

EU moves to ban high-risk AI

What's the issue?

Few emerging technologies are as revolutionary as AI, which presents tremendous opportunities but poses potentially dystopian risks. Regulating its development and use to ensure ethical standards are maintained, without stifling innovation, is a huge challenge.

What's the development?

The EC has published:

  • a communication and proposed Regulation providing a legal framework for AI
  • a coordinated plan with Member States on AI
  • a proposal for a Regulation on machinery products intended to adapt safety rules to enhance user trust in new generation products.

The draft AI Regulation sets out proposals for:

  • harmonised rules for developing AI systems, placing them on the market, and using them in the EU
  • prohibitions on certain types of AI
  • requirements for high-risk AI systems and for their operators
  • transparency rules for AI systems
  • a regulatory system and enforcement
  • stimulating the development of AI systems.

The Regulation takes a risk-based approach to AI systems. Some types of AI, as set out in Title II, are considered to carry unacceptable risk and are prohibited. High-risk AI systems, their providers, importers, distributors, and in some cases users, are subject to a range of obligations. Limited-risk systems are subject to transparency requirements, and minimal-risk systems are subject to existing legislation, although sectors may develop or adhere to codes of conduct to help foster trust.

The draft has been opened for public consultation and now goes to the Council and Parliament for discussion, so it has a long way to go before it is finalised and may well be subject to considerable changes. Once adopted, Member States will have two years to apply the majority of the Regulation, although some provisions will apply sooner.

What does this mean for you?

Emerging points of discussion include:

  • difficulty with the definitions (in particular, the definition of AI systems)
  • how to differentiate between high and low risk systems
  • lack of clarity in relation to some of the user obligations which must be determined on a case by case basis
  • lack of a one-stop-shop regulatory mechanism which could lead to a lack of harmonisation
  • whether the measures go too far and will stifle innovation
  • whether they don't go far enough – for example, there are notable omissions for military AI, and exemptions around mass surveillance. The European Data Protection Supervisor, among others, has expressed disappointment at the lack of an outright ban on the use of AI-driven remote biometric identification in public spaces.

Stakeholders who have had to get to grips with the GDPR will find many of the concepts in the Regulation familiar. From the risk-based approach, to the requirements around transparency and information provision, as well as record-keeping, territorial scope and enforcement, cybersecurity and data governance, there are recognisable requirements.

The Regulation will not apply in the UK, but its wide territorial scope means it will impact UK businesses placing AI products on the EU market, using them in the EU, and providing output from AI which is used in the EU.

It remains to be seen both how the UK develops its own framework, and whether the EC Regulation changes as it moves through the legislative process. As with the GDPR, the Regulation is likely to benefit from supplementary guidance, to be supplied in time by the planned EU-level regulatory board.

Find out more

What does the Regulation cover?

The AI Regulation is intended to apply to:

  • providers placing AI systems on the EU market or putting them into service, regardless of where they are based
  • users of AI systems located in the EU
  • providers and users of AI systems located in a third country where the output produced by the system is used in the EU.

What is an AI system?

The Regulation defines an AI system as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with".

What AI uses are considered to pose an unacceptable risk?

AI uses which are considered to violate fundamental rights are set out in Title II and will be prohibited. They include placing on the market, putting into service or use of AI systems which:

  • deploy subliminal techniques beyond a person's consciousness to materially distort a person's behaviour in a manner that causes or is likely to cause the person or another person physical or psychological harm
  • exploit vulnerabilities of a specific group due to their age, physical or mental disability, to materially distort the behaviour of a person in that group in a manner which is likely to cause them or another person physical or psychological damage
  • evaluate the trustworthiness of individuals over a period of time based on their social behaviour or predicted personal or personality characteristics to establish a social score which leads to detrimental or unfavourable treatment of an individual or group in contexts unrelated to those in which the data was originally collected, or that is unjustified or disproportionate to their social behaviour or its gravity
  • use real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes except (but subject to additional requirements) where strictly necessary to achieve: targeted searches for potential victims of crime including missing children; prevention of a specific, substantial and imminent threat to life or physical safety, or of a terrorist attack; or detection, localisation, identification or prosecution of someone suspected of a criminal offence carrying a custodial sentence of at least three years.

What is a high-risk AI system?

AI systems which have an adverse impact on people's safety or their fundamental rights are considered high-risk. This includes where:

  • the AI system is intended to be used as a safety component of a product or is itself a product covered by legislation listed in Annex II (broadly product safety legislation)
  • the product whose safety component is the AI system or the AI system itself as a product is required to undergo a third-party conformity assessment with a view to placing it on the EU market or putting the product into service, pursuant to the legislation in Annex II.

In addition, Annex III contains a list (which can be amended under certain circumstances) of high-risk AI systems. The list includes 'real-time' and post remote biometric identification systems (including FRT), credit scoring systems, AI systems known to contain bias, AI for recruitment, and systems to assess eligibility for welfare or legal aid (some of these are subject to limited exemptions).

What are the proposed rules on high-risk AI systems?

High-risk AI systems will be subject to a number of requirements including:

  • creation of a continuous, iterative risk management system running for the lifecycle of the AI system
  • data and data governance requirements for systems trained on data, to ensure the data meets relevant quality criteria
  • creation of technical documentation meeting certain requirements
  • record keeping
  • transparency and information provision
  • human oversight
  • accuracy, robustness and cybersecurity.

Various stakeholders including providers, importers, distributors and users of high-risk systems are subject to individual requirements including:

  • to ensure the AI systems comply with the specified requirements and undergo the relevant conformity assessment procedure before they are placed on the market or put into service
  • to affix CE marking to the systems to indicate conformity with the Regulation.

Notifying authorities and notified bodies

Member States are required to designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. The Regulation sets out the process for conformity assessment and for evidencing it through CE marking, declarations of conformity and certification.

Regulators and enforcement

The EU will set up an EU database for stand-alone high-risk AI systems and establish a European Artificial Intelligence Board comprising Member State representatives from the relevant national supervisory authority (to be designated or created by each Member State). National regulators will have the power to impose sanctions for non-compliance, including penalties of between 2% and 6% of annual global turnover.

Innovation

Title V of the Regulation sets out measures to support innovative development of AI in the EU, including establishing sandboxes and providing support to small-scale providers and users.

Next steps

The legislation is in its infancy and may well change as it moves towards enactment. With such fast-moving technology, a risk-based approach seems sensible but, as always, we will have to wait for the finalised Regulation to understand the full requirements.

For more on AI and other disruptive tech, see our latest edition of Download.
