

KI-Verordnung / AI Act (dt./eng.) – 2 / 9

Prohibited practices under the draft AI Act – Does the European Commission want to ban Instagram?

  • Briefing

Stephan Manuel Nagel, LL.M. (EUI)




On 21 April 2021 the European Commission finally presented its Proposal for a Regulation laying down harmonised rules on artificial intelligence (COM/2021/206, available here; referred to in this article as the “Draft AI Act”). In doing so, it intends to take on a global pioneering role in the regulation of artificial intelligence systems. With this, the world’s first concrete draft law for the regulation of artificial intelligence, the Commission seeks to perform a regulatory balancing act. As Commission Vice-President and Competition Commissioner Vestager emphasises, the planned AI regulation aims to establish Europe as a “global centre for trustworthy artificial intelligence (AI)” in the future, thereby safeguarding the diverse socio-economic benefits of these rapidly developing technologies, while at the same time ensuring the protection of fundamental rights and the security of EU citizens.[1] The legislative challenge therefore lies in creating a balanced regulatory framework that avoids stifling innovation and growth through bureaucratic overregulation and excessive prohibitions.

Structure and design of the AI Act

The Draft AI Act links the intensity of regulatory intervention to the level of threat to fundamental rights and security of citizens posed by the respective use of an AI system. In doing so, it adopts a multi-level risk-oriented classification.

Particularly harmful practices are to be banned altogether. The so-called high-risk AI systems are to be subject to comprehensive quality and risk control. Finally, certain types of AI systems (e.g. deep fakes) are to be subject to transparency and labelling obligations.

This article takes a closer look at the prohibited practices.

Prohibited practices

The Draft AI Act contains a total of four prohibited practices in Art. 5. The prohibitions in Art. 5 (1) (a) and Art. 5 (1) (b) Draft AI Act concern manipulation of behaviour and are directed at private entities, while the prohibitions on the use of AI for purposes of so-called “social scoring” in Art. 5 (1) (c) Draft AI Act and on biometric real-time remote identification in publicly accessible spaces for law enforcement purposes in Art. 5 (1) (d) Draft AI Act concern state action or action on behalf of the state.

Violations of these prohibitions in Art. 5 Draft AI Act may result in fines of up to EUR 30 million or, in the case of companies, fines of up to 6 percent of the total worldwide turnover of the previous business year (whichever is higher).
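The “whichever is higher” mechanism of this penalty provision can be sketched in a few lines of Python. This is purely illustrative; the function name and the example turnover figures are hypothetical and not part of the draft:

```python
def max_fine_eur(worldwide_turnover_eur: int) -> float:
    """Upper bound of a fine for a violation of Art. 5 Draft AI Act:
    EUR 30 million or, for companies, 6 percent of total worldwide
    turnover of the previous business year, whichever is higher."""
    return max(30_000_000, worldwide_turnover_eur * 6 / 100)

# Hypothetical company with EUR 1 billion turnover:
# 6% = EUR 60 million, which exceeds the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))

# Hypothetical company with EUR 100 million turnover:
# 6% = EUR 6 million, so the EUR 30 million floor applies.
print(max_fine_eur(100_000_000))
```

The turnover-based cap thus only becomes relevant above EUR 500 million of worldwide annual turnover; below that threshold, the flat EUR 30 million ceiling governs.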

Manipulation of behaviour

According to Art. 5 (1) (a) and Art. 5 (1) (b) Draft AI Act, AI systems are to be prohibited that either deploy techniques influencing persons beyond their consciousness, or exploit the vulnerability of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of such persons in a manner that may cause psychological or physical harm to these persons or to others.

The particularly broad wording of this provision may also cover generally accepted AI applications, which raises the question of whether the European Commission actually intends such a broad scope of application. On a literal construction of the provision, its application to the social network Instagram could not be ruled out.[2]

The broad definition of the term “artificial intelligence system” in Art. 3 No. 1 Draft AI Act includes social networks such as Instagram. In addition, internal studies by Facebook have allegedly shown that Instagram can cause a negative body image, eating disorders and even thoughts of physical self-harm or suicide among young teenage users.[3]

However, having regard to the personal responsibility of users (and their parents), such an interpretation of the prohibition seems too broad. It should therefore be considered whether the element of “weakness or need for protection of a certain group of persons due to their age or physical or mental disability” should be formulated more narrowly, or even deleted altogether, based on the notion of the autonomous citizen.

Social Scoring

According to Art. 5 (1) (c) Draft AI Act, the use of AI for “social scoring” purposes is to be prohibited.

“Social scoring” AI refers to systems used to assess or classify the trustworthiness of natural persons, where the resulting score leads to detrimental treatment either in social contexts unrelated to the circumstances in which the data were originally collected, or in a manner that is unjustified or disproportionate to the persons’ social behaviour.

An example would be the social credit system already in place in China, implemented through the use of AI. The use of AI enables the Chinese authorities to monitor citizens’ behaviour in all areas of life, collect data and then assign each individual a corresponding “social score”. For example, crossing the street at a red light can lead to a person’s credit rating being downgraded[4] – that is, to a disadvantage in social contexts unrelated to the circumstances under which the data was originally collected.

This ban would therefore indeed be a significant step by the European legislator to safeguard fundamental liberties and the privacy of EU citizens in the face of an already real threat from “social scoring” AI.

Biometric real-time remote identification

The use of AI systems for biometric real-time remote identification in publicly accessible spaces for law enforcement purposes is also to be, in principle, prohibited (Art. 5 (1) (d) Draft AI Act). This prohibition is not limited to certain identification systems (e.g. those based on real-time remote identification of faces), but instead covers all AI of this type (e.g. also those based on recognition of an individual’s gait, so-called gait recognition).

However, the use of AI for biometric real-time remote identification in publicly accessible areas for law enforcement purposes is to be permitted, if (i) the use is absolutely necessary for a purpose specified in Art. 5 (1) (d) (i) – (iii) Draft AI Act, (ii) the requirements for proportionality set out in Art. 5 (2) Draft AI Act are met, and (iii) prior authorisation by a judicial authority or an independent administrative authority of the Member State has been obtained for that individual case or, in particularly urgent cases of imminent danger, is obtained subsequently.

The permissible purposes for which AI-based real-time remote identification may exceptionally be used in publicly accessible spaces are (i) the search for specific potential victims of crime or missing children, (ii) the prevention of a concrete and acute danger to the life or physical integrity of natural persons or of a terrorist attack, and (iii) the prosecution of suspects of certain criminal offences for which the maximum custodial sentence is at least three years.

According to Art. 5 (2) Draft AI Act, the use for these purposes must additionally satisfy a proportionality test. In particular, the severity, likelihood and extent of the threat of harm if the AI is not used must be weighed against the severity, likelihood and extent of the consequences of the use of the AI for the rights and freedoms of all affected individuals. Finally, the use of AI in publicly accessible spaces for law enforcement purposes must be limited to what is strictly necessary in terms of time, space and personnel, and appropriate safeguards must be introduced.

The corresponding use of AI is to be legally regulated by the individual Member States (Art. 5 (4) Draft AI Act).


In conclusion, it can be said that the Draft AI Act is a milestone in the protection of civil liberties against possible threats from artificial intelligence. This applies in particular to the absolute ban on social scoring and the fundamental ban on the use of AI systems for biometric real-time remote identification in publicly accessible spaces for law enforcement purposes. However, the broad wording of the draft seems to overshoot the mark in part, especially with regard to behaviour-manipulating AI, and calls into question once again the guiding principle of the free and responsible citizen, as the Instagram example shows. In this respect, clarifying restrictions would be welcomed.

[1] European Commission, press release dated 21.4.2021, https://ec.europa.eu/germany/news/20210421-kuenstliche-intelligenz-eu_de.

[2] See also Stieler, „Experte warnt: Instagram könnte als Risiko-Anwendung eingestuft werden“ – interview with Stephan Manuel Nagel, MIT Technology Review dated 12.05.2021, available under https://www.heise.de/hintergrund/Experte-warnt-Instagram-koennte-als-Risiko-Anwendung-eingestuft-werden-6043779.html.

[3] See Mäder, Fulterer, „Plattform für Menschenhandel, Schäden bei Teenagern, falsche Nutzerzahlen: Das sind die Vorwürfe der Facebook-Whistleblowerin“, Neue Zürcher Zeitung dated 05.10.2021, available under

[4] Siemons, „Chinas Sozialkreditsystem – Die Totale Kontrolle“, Frankfurter Allgemeine Zeitung dated 11.05.2018, available under https://www.faz.net/aktuell/feuilleton/debatten/chinas-sozialkreditsystem-die-totale-kontrolle-15575861.html.


