19 November 2024
The rapid advancement of generative artificial intelligence has heralded a new era of content creation and service delivery. However, with this significant power comes substantial responsibility. The rise of such technologies has generated serious concerns regarding content safety, leading regulatory bodies worldwide to formulate guidelines to ensure the ethical and secure deployment of AI services. Among these, China's Basic Requirements for Generative AI Service Security (Basic Requirements), promulgated on 1 March 2024, emerge as a comprehensive framework designed to address these pressing concerns.
The Basic Requirements adopt a granular approach to ensuring the safety of generative AI services. They delineate a series of obligations for service providers, covering critical areas such as data sourcing, content safety, model security, and safety measures. Notably, Appendix A categorises 31 distinct types of safety risk that may arise from AI-generated content. These range from violations of socialist core values and discrimination to commercial violations, infringements of legal rights, and the failure to meet security needs specific to various service types.
A pivotal aspect of the Basic Requirements is the strong emphasis on the traceability and legality of training data sources. Service providers are mandated to perform safety assessments both before and after data collection, ensuring that the data employed contains no more than 5% illegal or harmful information. This dual assessment mechanism represents a proactive strategy for reducing the risks associated with biased or toxic AI training datasets.
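To illustrate the kind of threshold check this obligation implies, the sketch below evaluates a training corpus against a 5% ceiling on flagged content. This is a minimal, hypothetical example: the `is_harmful` predicate, the sample names, and the keyword approach are all illustrative stand-ins, not anything prescribed by the Basic Requirements, which contemplate trained classifiers and manual review rather than a simple keyword test.

```python
def is_harmful(sample: str) -> bool:
    # Placeholder predicate: a production pipeline would rely on a trained
    # classifier and/or manual review, not a hard-coded keyword set.
    blocked_terms = {"harmful_term_a", "harmful_term_b"}
    return any(term in sample for term in blocked_terms)

def passes_source_assessment(corpus: list[str], threshold: float = 0.05) -> bool:
    """Return True if the share of flagged samples stays within the threshold."""
    if not corpus:
        return True
    flagged = sum(1 for sample in corpus if is_harmful(sample))
    return flagged / len(corpus) <= threshold
```

In practice the same check would run twice, mirroring the pre- and post-collection assessments: once on a sample of the candidate source before ingestion, and again on the assembled corpus before training.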
The Basic Requirements underline the need for content safety by obliging service providers to implement robust filtering mechanisms. These measures include keyword blacklists, classification models, and manual spot checks to proactively eliminate illegal or inappropriate content from AI-generated outputs. This aligns with the global movement toward responsible AI development, whereby creators and providers are responsible for preventing the dissemination of harmful content.
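The layered moderation approach described above can be sketched as a keyword blacklist applied as a first pass, with random sampling feeding manual spot checks. The blacklist entries, function names, and sampling rate below are assumptions for illustration only; the Basic Requirements specify the categories of measure, not an implementation.

```python
import random

BLACKLIST = {"banned_phrase_1", "banned_phrase_2"}  # hypothetical entries

def keyword_filter(output: str) -> bool:
    """First-pass filter: accept only outputs free of blacklisted keywords."""
    return not any(term in output for term in BLACKLIST)

def sample_for_review(outputs: list[str], rate: float = 0.01, seed: int = 0) -> list[str]:
    """Randomly sample a fraction of outputs for manual spot checks."""
    if not outputs:
        return []
    rng = random.Random(seed)
    k = max(1, int(len(outputs) * rate))
    return rng.sample(outputs, k)
```

A classification model would sit between these two layers, scoring outputs that pass the keyword filter before a subset is routed to human reviewers.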
The Basic Requirements also address the protection of intellectual property rights concerning AI-generated content. They stipulate the appointment of dedicated personnel to oversee IP-related issues and facilitate third-party inquiries regarding the use of copyrighted material. This provision is particularly critical in the domains of AI-generated art and literature, where the distinctions between original creations and derivative works can often become blurred.
Additionally, the document introduces the concept of ‘opt-out’ consent for utilising user-generated content in AI training, an approach with clear privacy implications. While it streamlines the collection of diverse datasets, it raises important questions about the soundness of consent mechanisms and the potential for privacy breaches. Striking a balance between leveraging user engagement and safeguarding individual rights remains a complex challenge.
The Basic Requirements further advocate for the regular updating of keyword libraries and test databases to remain adaptable to the evolving landscape of AI and internet governance. This dynamic approach is essential to keep pace with the rapid technological and societal changes that influence how AI interacts with users and the broader community.
For service providers, the Basic Requirements present both challenges and opportunities. On one hand, they require the development of sophisticated content moderation and data management systems. On the other, they provide a clear roadmap for enhancing the credibility and reliability of AI services, which can foster greater user trust and confidence.
In conclusion, as generative AI continues to permeate various sectors, the Basic Requirements offer a comprehensive blueprint for navigating the complexities of AI content safety. By addressing the root causes of potential harms and providing clear guidelines for service providers, these requirements not only promote the ethical development of AI but also pave the way for a safer and more responsible digital ecosystem. This initiative reflects a growing global recognition of the necessity for robust regulatory frameworks to govern the rapidly evolving landscape of AI.
If you need more insight, please contact our TMC/IP team.