Authors

Dr. Michael Tan

Partner

Dr. Thomas Pattloch, LL.M.Eur

Partner

2 August 2023


Implications of China’s New Generative AI Rules

  • In-depth analysis

As a major player in the field of artificial intelligence (AI), China has joined the race to strike a balance between promoting the development of AI and keeping AI technologies under adequate control. From the outset, and somewhat differently from the EU, China has taken a more vertical, narrow approach to legislation, focusing on specific issues. For example, the earlier Provisions on the Administration of Deep Synthesis Internet Information Services, which came into effect on January 10, 2023, regulate in detail content supervision and regulatory compliance for all services based on deep synthesis technology, i.e., technology that uses deep learning, virtual reality or other generative or synthetic algorithms to produce text, images, audio, video, virtual scenes or other network information.

The official launch of the Interim Measures for the Administration of Generative Artificial Intelligence Services on July 10, 2023 (the “Measures”) is another firm step in this direction. The Measures will take effect on August 15, 2023. Their joint promulgation by seven powerful national-level ministries, led by the Cyberspace Administration of China (CAC), shows the weight attached to these new rules; it is rare for so many regulators to align behind a single set of rules, and enforcement cases can therefore be expected in the near future. The Measures cover a number of AI-related hot topics and are quite comprehensive, governing not only the provision of services that use AI technology to generate text, images, audio, video or other content to the public within the territory of the People's Republic of China (PRC), but potentially also the use of such services. Below we briefly summarize some highlights worth noting for international companies.

Risk Classification

The European Union has already proposed its landmark rules for AI (most importantly the EU AI Act). A key feature of this framework is a four-tier classification of AI systems with different requirements and obligations, following a “risk-based approach” under which risks are graded as unacceptable, high, limited or minimal. A similar approach is reflected in Article 3 of the Measures, which stipulates that generative AI services shall be regulated in an inclusive and prudent manner, with classification and grading applied.

The EU AI Act, on the other hand, provides for a more detailed classification system: a prohibited band for unacceptable risk, a high-risk band triggering conformity assessments, a limited-risk band subject to additional transparency requirements, and a minimal-risk band regulated mainly by voluntary codes of conduct. High-risk AI systems are of particular concern to the EU, and practical examples of each band have already been made known. For example, remote biometric identification systems remain explicitly high-risk, and their use by public authorities may even be prohibited.

In comparison, the Measures, comprising a total of 24 articles, do not go into such detail. While Article 16 does require the relevant competent state authorities to formulate appropriate classification and grading regulatory rules or guidelines (implementing a system of “classified and graded supervision”), taking into account the characteristics of generative AI technology and its service applications in relevant industries and fields, it does not yet provide specific details on the classification and grading of generative AI services. A strong “do and learn” approach can be seen in the same article, which tasks regulators with “improving a scientific way of supervision and management that is in line with the development of innovation”.

Governance Principles and Scope of Application

As China is a global leader in AI research and development, the Measures address concerns over the potential risks associated with the misuse of generative AI systems and set out general principles for the use of generative AI technologies. Many of these principles are similar to those under the EU AI Act as well as to general AI governance principles.

For example, Article 4 of the Measures stipulates, among other things, that offering or using generative AI services shall comply with laws and regulations, respect public morals and ethics, and observe rules such as the following:

  • Non-discrimination: taking effective measures to prevent discrimination on the basis of ethnicity, faith, country, region, gender, age, occupation, health and other grounds in the design of algorithms, the selection of training data, the creation and optimization of models, and the provision of services.
  • Fair competition: respecting intellectual property rights and business ethics, keeping trade secrets confidential, and not taking advantage of algorithms, data, platforms, etc., to create monopolies and/or conduct unfair competition.
  • Rights of others: respecting the legitimate rights and interests of others, and neither endangering the physical and mental health of others nor infringing upon others’ rights of portrait, reputation, honor, privacy and personal information.
  • Transparency and trust: based on the features of the service concerned, taking effective measures to enhance the transparency of generative AI services and improve the accuracy and reliability of the generated content.

It should be noted that, according to the wording of Article 4 of the Measures, the above principles apply not only to providers of generative AI services, but also to the use of such services.

This may create some confusion as to the scope of application of the Measures where research and development takes place without a specific service yet being provided to the public in China. Article 2 of the Measures stipulates that only those who provide generative AI services to the public in the PRC are subject to the Measures. Literally interpreted, the scope of application does not cover industrial organizations, enterprises, educational and scientific research institutions, public cultural institutions, relevant professional institutions, etc. that conduct research and development applying generative AI technology but do not provide generative AI services to the public in the PRC.

Irrespective of any inconsistencies or open questions on the scope of the Measures, it is reasonable to expect that they will be interpreted broadly rather than narrowly. It is worth noting that the wording of Articles 2 and 4 of the Measures indicates that no exemption applies to Chinese providers of AI services that incorporate overseas AI solutions into their own products and services provided in China, which in turn can lead to the application of the Measures to AI solutions developed outside of China. This should be carefully taken into account when structuring cooperation and development agreements with Chinese partners for AI-related services to be offered in China.

Use of Data

Generative AI is a data-rich area, as the pre-training of models relies heavily on large amounts of data. In this respect, the Measures impose obligations on providers of generative AI services. A service provider's AI training activities, such as pre-training and optimization training, shall be conducted in accordance with the law and follow the requirements below:

  • use data and base models from legitimate sources,
  • not infringe upon the IP rights of others relating to the concerned data,
  • obtain the consent of the data subjects when personal information is involved, unless there is a sound legal reason not to do so,
  • take effective measures to improve the quality of training data and to enhance the authenticity, accuracy, objectivity and diversity of training data, and
  • comply with other statutory requirements, e.g., the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law (PIPL), as well as other regulatory requirements.

Article 9 of the Measures stipulates that a service provider will be deemed a personal information handler (akin to a controller under EU law) under the PIPL. A service provider is further required to protect the information uploaded by data subjects, including their usage records, to minimize data collection, to retain data only in a legally permitted manner, and not to share personal data with others unlawfully. Any request by a data subject for review, copying, correction, supplementation or deletion must also be respected and satisfied.

In short, generative AI is subject to existing privacy and data protection frameworks such as the PIPL, which means that a sound and transparent data governance structure is highly recommended to ensure compliance in this respect.

Content Requirements

In addition to the general principles in Article 4 summarized above, the same article reiterates the most critical principle that can also be found in many other Chinese laws and regulations, i.e., the protection of national security and state interests. Providers and users must, among other things:

  • adhere to the core socialist values,
  • refrain from generating content that incites subversion of state power or the overthrow of the socialist system, endangers national security and interests, tarnishes the country’s image, incites splitting the country, undermines national unity and social stability, or promotes terrorism, extremism, ethnic hatred, ethnic discrimination, violence, obscenity and pornography, and
  • refrain from generating false and harmful information prohibited by laws and regulations.

With regard to algorithm transparency obligations, Article 4 of the Measures obliges those providing or using such services to take “effective measures” to “increase the transparency in generative AI services and improve the accuracy and reliability of generated content”, but does not provide specific criteria for how such transparency will be graded and judged in the event of a conflict.

A service provider is further required to take responsibility for the content generated by its services.

Where illegal content is generated, it shall take timely measures to stop the generation and transmission of such content, delete it, make corrections by optimizing model training, and report the matter to the competent authorities. It shall also have a proper service agreement in place specifying the rights and obligations of the parties, which, as an obligation under the Measures, must cover its right to suspend the services and report the case to the regulators if it discovers any illegal activities by users.

Where do we stand?

The launch of the Measures, even though they comprise only 24 articles, is a significant step forward in regulating and building trust in AI technologies in the country. The general policy preference is still to encourage rather than restrict the development of AI, albeit subject to strong content control and an elevated level of liability for service providers. Article 5 of the Measures states that the innovative application of generative AI in various industries and fields will be encouraged and that a corresponding ecosystem shall be built up, while Article 6 emphasizes support for “indigenous innovation”. The country will support cooperation among various players in generative AI technology innovation, data resource construction, commercialization and application, and risk prevention.

According to the State Council’s Next Generation Artificial Intelligence Development Plan promulgated on July 20, 2017, the country’s plan by 2025 is to establish a preliminary legal, ethical and policy framework to regulate AI. The Plan also mentions that by 2025, China aims to achieve a major breakthrough in the basic theories of AI and a leading position in some technologies and applications, with AI becoming a major booster for industrial upgrading and economic pattern change.

Thus, China’s legislation since 2023 marks the beginning of a race among regulators to create an environment that is ultimately favorable to more AI services and products. Companies and institutions can take advantage of the benefits of each system, while making sure to comply with the new regulatory framework.

In this series

Technology, Media and Communications (TMC)

What games businesses need to consider when drafting a generative AI acceptable use policy

Martijn Loth highlights the top ten considerations to help games businesses mitigate risks associated with using generative AI when developing video games.

31 July 2023

by Martijn Loth

Artificial intelligence and machine learning

Implications of China’s New Generative AI Rules

2 August 2023

by Dr. Michael Tan, Dr. Thomas Pattloch, LL.M.Eur

Restructuring and insolvency

Landmark insolvency decision in Hong Kong on treatment of cryptocurrencies

1 August 2023

by multiple authors
