20 May 2021
On 21 April 2021, the European Commission ("EU Commission") published its eagerly awaited draft regulation to regulate the use of artificial intelligence ("AI"). The draft regulation, which had already been leaked at an earlier stage, sets out harmonised rules for the development, placing on the market and use of AI systems in the European Union ("EU"). It represents an important step in the comprehensive AI strategy that began in 2018 and has finally moved into the focus of the EU Commission under Ursula von der Leyen's presidency.
With the communications "Artificial Intelligence for Europe" and the "Coordinated Plan on Artificial Intelligence", the attempt to advance digitisation in the EU and make it internationally competitive began more than three years ago. The EU Commission followed up on this intention with its digital strategy presented on 19 February 2020: building on these strategy papers, the "White Paper on Artificial Intelligence - A European Approach to Excellence and Trust" ("White Paper") developed, for the first time, a concept for regulating AI.
According to the EU Commission's ideas, a regulatory framework tailored to the specifics of AI should strengthen society's trust in existing as well as future AI applications. This framework attempts to find its own way of using AI on the basis of European values without blocking the undoubtedly great opportunities of the technology against the background of international competition. After the publication of the White Paper, the EU Commission launched a broad consultation process in which interested parties from all over the world were able to submit comments on the concept, which were then to be taken into account in the further elaboration.
Now, with the Artificial Intelligence Act, the most concrete proposal to date for regulating the use of AI has been presented, which is intended to continue along this path. It follows a risk-based approach already laid out in the White Paper, according to which AI applications are grouped into four categories depending on their potential risk: "unacceptable risk", "high risk", "low risk" and "minimal risk". The heart of the draft is the comprehensive regulation of AI systems that pose a high risk under this approach.
Applications of AI with unacceptable risk are prohibited in Art. 5 No. 1. According to the regulation, this includes applications that manipulate human behaviour and can thus harm people (lit. a and b), which according to the Commission could be, for example, a toy with a voice assistant that encourages minors to behave dangerously. However, what is supposed to fall under the broad term of manipulation remains unclear.
Also prohibited are applications that enable authorities to assess the trustworthiness of persons on the basis of their social behaviour or personality-related characteristics and to treat them unfavourably as a result (lit. c). This includes, for example, social credit systems, which are currently already practised in various forms in China and which, in the opinion of the EU Commission, are not compatible with European values.
Finally, the provision prohibits in principle the use of real-time remote recognition systems in public spaces for the biometric identification of persons for the purpose of law enforcement (lit. d). However, several exceptions are provided for in the provision, which could be used regularly in practice: the use of AI would be permitted, for example, for the prevention of terrorism or the detection of serious crimes. This paragraph is likely to prove the most problematic in the further legislative process, as the EU member states have very different ideas about the relationship between freedom and security.
The ban is flanked by a fine provision in Art. 71 No. 3 lit. a, which provides for severe fines of up to EUR 30 million or 6% of the worldwide annual turnover, whichever is higher.
The second category of AI applications mentioned in Art. 6 and 7 of the regulation are those that pose a high risk to human health, safety or fundamental rights. The classification as high risk depends not only on the activity performed by the AI system, but also on the purpose for which the system is used.
High-risk AI applications are specified in Annex III of the draft regulation by means of a list that can be updated on an ongoing basis. These include, for example, AI applications for the biometric identification and categorisation of persons (No. 1), the management and operation of critical infrastructure (No. 2), the regulation of access to educational institutions (No. 3), recruiting and personnel management (No. 4) or access to essential private and public services (No. 5). AI systems to support law enforcement (No. 6), migration, asylum and border control (No. 7) and the judiciary (No. 8) also fall under high-risk AI applications. These are largely applications that make decisions about people in areas sensitive to fundamental rights and whose use is already possible today, or at least conceivable in the near future.
These applications will not be banned, but in order to be authorised on the European market, they must fulfil the strict requirements described in Art. 8 et seq. in an ex-ante conformity assessment. The requirements described in these provisions already reflect the state of the art today and, according to the Commission, should largely correspond to other international recommendations in order to ensure compatibility in the international context.
Thus, AI systems must be developed on the basis of data that meet certain quality criteria (Art. 10) and achieve an appropriate level of accuracy and security (Art. 15). To this end, a risk management system (Art. 9) and detailed technical documentation (Art. 11) must be established, while automatic logging (Art. 12) must be ensured. High-risk AI systems must also be designed in such a way that their functioning is sufficiently transparent to allow users to interpret and make appropriate use of the system's results (Art. 13) and that they can be effectively monitored by natural persons (Art. 14).
Violations of these requirements are also subject to strict sanctions: Art. 71 No. 4 provides for fines of up to EUR 20 million or 4% of the worldwide annual turnover, whichever is higher; in the case of violations of Art. 10, according to Art. 71 No. 3 lit. b, fines of up to the higher amount of EUR 30 million or 6% of the worldwide annual turnover may even be imposed.
While the draft regulation intervenes strongly with the prohibition of systems posing an unacceptable risk and the extensive regulation of high-risk systems, other AI applications, i.e. those with low or minimal risk, are deliberately to remain largely unregulated according to the intention of the EU Commission, in order to create innovation-friendly conditions.
The majority of AI applications are classified by the EU Commission as of minimal risk and are not covered by the draft at all, such as video games, search algorithms or spam filters. In addition, there are supposed to be low-risk applications for which the draft only provides for certain transparency obligations in Art. 52. When using such systems, which include "chatbots" or "deep fakes", users must be made aware that they are interacting with an AI.
Although the draft regulation recognisably implements only a minimum consensus on regulation (an earlier version provided for much more far-reaching bans on AI applications used to influence and monitor people), it is facing severe criticism from industry associations. The category of high-risk AI applications is deemed too broad and is seen as stifling future innovation, causing Europe to fall even further behind in international competition. This criticism is not unexpected, as the White Paper presented less than a year earlier had already been criticised for the same reasons.
For civil rights activists, on the other hand, the draft does not go far enough: they criticise that far too few applications are subject to the ban in Art. 5. For instance, the automatic recognition of sensitive characteristics such as gender, sexuality and origin should be prohibited, and the ban on remote recognition, which currently only covers real-time systems and provides for numerous exceptions, should be significantly tightened. Neither, they argue, is compatible with European values, and abuse can only be effectively prevented by an outright ban.
Despite this criticism, the EU Commission has laid a foundation stone for the regulation of AI with the proposed regulation, which, like the General Data Protection Regulation, has the potential to develop into an international "blueprint".
Algorithms and AI applications already determine our everyday lives, even if we do not always notice it, and they are increasingly being used in areas sensitive to fundamental rights. It is no longer just search engines and chatbots; in some parts of the world, the use of AI has long been commonplace in insurance companies and human resources departments, and it is even used as a substitute for judges. It is not difficult to predict that the scope of application of AI systems will increase in the future.
It is therefore initially quite positive that the EU is developing a regulation for this technology. However, as the criticism from various sides shows, the regulation of AI must leave enough room for innovation and enable AI applications in everyday areas, while at the same time offering effective protection in areas sensitive to fundamental rights and making the rules enforceable. This will not be achieved with rigid bans on AI applications. The path chosen by the EU Commission with the draft regulation, a more flexible, risk-based approach focused on strict regulation of high-risk AI systems, is therefore fundamentally positive, as long as enough freedom is simultaneously granted for less harmful applications. In conclusion, a carefully balanced classification of AI systems and sensible regulations, compliance with which can also be verified, will be decisive in ensuring that this regulation is as successful as anticipated.
In any case, there is no doubt that the EU still has a legislative mountain to climb until a final regulation can be reached: the draft must now pass through the European Parliament and the Council of the European Union in the legislative process, which will not proceed without amendments and most likely not until after months of tough wrangling.