2 August 2023
As a major player in the field of artificial intelligence (AI), China has joined the race to strike a balance between promoting the development of AI and keeping AI technologies under proper control. From the outset, and somewhat differently from the EU, China has taken a more vertical, issue-specific approach to legislation. For example, the earlier Provisions on the Administration of Deep Synthesis Internet Information Services, which came into effect on January 10, 2023, regulate in detail the supervision of content and compliance with regulatory requirements for all services based on deep synthesis technology, i.e., technology that uses deep learning, virtual reality or other generative or synthetic algorithms to produce text, images, audio, video, virtual scenes or other network information.
The official launch of the Interim Measures for the Administration of Generative Artificial Intelligence Services on July 10, 2023 (the “Measures”) is another firm step in this direction. The Measures will take effect on August 15, 2023. Their joint promulgation by seven national-level ministries, led by the Cyberspace Administration of China (CAC), underscores their weight; it is rare for so many regulators to align on a single set of rules, and enforcement cases can be expected in the near future. The Measures cover a number of AI-related hot topics and are quite comprehensive, governing not only the provision of services that use AI technology to generate text, images, audio, video or other content to the public within the territory of the People's Republic of China, but potentially also the use of such services. Below we briefly summarize some highlights worth noting by international companies.
The European Union has already proposed its landmark rules for AI (most importantly the EU AI Act). A key feature of this framework is a four-tier classification of AI systems with different requirements and obligations, tailored to a “risk-based approach” with risks graded from unacceptable and high to limited and minimal. A similar approach is reflected in Article 3 of the Measures, which stipulates that generative AI services shall be regulated in an inclusive and prudent manner, with classification and grading applied.
The EU AI Act, on the other hand, provides a more detailed classification system: a prohibited band for unacceptable risk, a high-risk band triggering conformity assessment, a limited-risk band subject to enhanced transparency requirements, and a minimal-risk band regulated mainly by voluntary codes of conduct. High-risk AI systems are of particular concern to the EU, and practical examples of each band have already been made known. For example, remote biometric identification systems remain explicitly high risk, and their use by public authorities may even be prohibited.
The Measures, in comparison, with a total of 24 articles, do not go into such detail. Article 16 does require the relevant competent state authorities to formulate appropriate classification and grading regulatory rules or guidelines (implementing a system of “classified and graded supervision”), taking into account the characteristics of generative AI technology and its service applications in relevant industries and fields, but it does not yet specify how generative AI services will be classified and graded. A strong “do and learn” approach can be seen in the same article, which tasks the regulators with “improving a scientific way of supervision and management that is in line with the development of innovation”.
China is a global leader in AI research and development, and the Measures address concerns over the potential risks associated with the misuse of generative AI systems while providing general principles for the use of generative AI technologies. Many of these principles mirror those under the EU AI Act as well as general AI governance principles.
For example, Article 4 of the Measures stipulates, among other things, that the provision and use of generative AI services shall comply with laws and regulations, respect public morals and ethics, and observe a set of enumerated rules.
It should be noted that, according to the wording of Article 4 of the Measures, these principles apply not only to providers of generative AI services but also to the use of such services.
This may create some confusion as to the scope of the Measures where research and development is conducted without a specific service yet being provided to the public in China. Article 2 of the Measures stipulates that only those who provide generative AI services to the public in the PRC are subject to the Measures. Literally interpreted, the scope of application does not cover industrial organizations, enterprises, educational and scientific research institutions, public cultural institutions, relevant professional institutions and the like that conduct research and development using generative AI technology but do not provide generative AI services to the public in the PRC.
Irrespective of any inconsistencies or open questions on the scope of the Measures, it is reasonable to expect that the Measures will be interpreted broadly rather than narrowly. Notably, the wording of Articles 2 and 4 indicates that no exemption applies to Chinese providers of AI services that incorporate overseas AI solutions into their own products and services provided in China, which in turn can bring AI solutions developed outside of China within the reach of the Measures. This should be carefully taken into account when structuring cooperation and development agreements with Chinese partners for AI-related services to be offered in China.
Generative AI is a data-rich area, as the pre-training of models relies heavily on large amounts of data. In this respect, the Measures impose obligations on providers of generative AI services: AI training activities, such as pre-training and optimization training, shall be conducted in accordance with law and meet a set of enumerated requirements.
Article 9 of the Measures stipulates that a service provider will be deemed a personal information handler (akin to a controller under EU law) under the Personal Information Protection Law (PIPL). A service provider is further required to protect the information uploaded by data subjects, including their usage records, to minimize data collection, to retain data only in a legally permitted manner, and not to illegally share personal data with others. Any request by a data subject to review, copy, correct, supplement or delete his or her personal information shall also be respected and satisfied.
In short, generative AI is subject to existing privacy and data protection frameworks such as the PIPL, which means that a sound and transparent data governance structure is highly recommended to ensure compliance in this respect.
In addition to the general principles in Article 4 summarized above, the same article reiterates the most critical principle found in many other Chinese laws and regulations: the protection of national security and state interests.
With regard to algorithm transparency, Article 4 of the Measures obliges providers to take “effective measures” to “increase the transparency in generative AI services and improve the accuracy and reliability of generated content”, but does not provide specific criteria for how such transparency will be graded or judged in the event of a dispute.
A service provider is further required to take responsibility for the content generated by its services.
In the case of illegal content, it shall take timely measures to stop the generation and transmission of that content, delete it, remediate the issue by optimizing model training, and report the matter to the competent authorities. It shall also put in place a proper service agreement specifying the rights and obligations of the parties which, as an obligation under the Measures, must cover the provider's right to suspend services and report to the regulators if it discovers any illegal activities by users.
The launch of the Measures, though comprising only 24 articles, is a significant step forward in regulating and building trust in AI technologies in China. The general policy tone is still to encourage rather than restrict the development of AI, subject to strong content control and an elevated level of liability for service providers. Article 5 of the Measures states that the innovative application of generative AI in various industries and fields will be encouraged and that a corresponding ecosystem shall be built up, while Article 6 emphasizes support for “indigenous innovation”. The country will support cooperation among various players in generative AI technology innovation, data resource construction, commercialization and application, and risk prevention.
According to the State Council’s Next Generation Artificial Intelligence Development Plan promulgated on July 20, 2017, the country’s plan by 2025 is to establish a preliminary legal, ethical and policy framework to regulate AI. The Plan also mentions that by 2025, China aims to achieve a major breakthrough in the basic theories of AI and a leading position in some technologies and applications, with AI becoming a major booster for industrial upgrading and economic pattern change.
Thus, China’s legislation since 2023 marks the beginning of a regulatory push to create an environment that is ultimately favorable to more AI services and products. Companies and institutions can weigh the benefits of each system while ensuring compliance with the new regulatory framework.