In the European Union, progress on the draft Artificial Intelligence Act (AI Act) is ramping up following the Council's position published on 6 December 2022. Once the European Parliament finalises its own provisionally agreed negotiating position, trilogues can begin, and the hope is that the legislation will be agreed by the end of 2023. The EU has chosen to regulate AI as such, from the top down, whereas the UK is taking a sector-based approach, as we discuss here.
In this article, we take a look at the bigger picture and at what other countries are planning in terms of AI regulation.
Brazil
Brazil is working on its first law to regulate AI. On 1 December 2022, a temporary commission of jurists convened by the Brazilian Senate presented a report with studies on the regulation of AI, including a draft bill. The draft now serves as the starting point for the Senate's further deliberations on new AI legislation. According to the commission's rapporteur, the proposed regulation rests on three central pillars: guaranteeing the rights of people affected by AI systems, classifying systems by level of risk, and providing for governance measures for companies that provide or operate AI systems.
AI definition
The draft has clear parallels with the EU's draft AI Act. Its definition of AI systems is close to that in the European Commission's draft (which is still being debated). AI is defined as "a computer system with varying degrees of autonomy that is designed to use approaches based on machine learning and/or logic and knowledge representation to infer from input data from machines or humans how to achieve a specific set of goals, with the aim of making predictions, recommendations or decisions that can influence the virtual or real environment".
Risk classification
Like the AI Act, the draft sets out risk categories and corresponding obligations. Prohibited AI systems include systems that exploit vulnerabilities of certain groups of natural persons where those techniques or the exploitation of those vulnerabilities are intended to harm the health or safety of the end user. Social scoring by public bodies is prohibited, as is the use of biometric identification systems in publicly accessible spaces, unless a specific law or court order explicitly allows the use of such systems (e.g. for the prosecution of criminal offences).
Like the AI Act, the Brazilian draft enumerates high-risk systems, including AI systems used in areas that are sensitive from a fundamental-rights perspective, such as critical infrastructure, education and vocational training, recruitment, autonomous vehicles and biometric identification. This list can be updated by the competent authority. High-risk systems will be recorded in a publicly accessible database.
Data subject rights
The draft grants data subjects rights against providers and users of AI systems, regardless of the system's risk rating. These include: the right to information about their interactions with an AI system prior to its use; the right to an explanation of a decision made by an AI system within 15 days of the request; the right to challenge decisions made by AI systems that produce legal effects or significantly affect the interests of the party concerned; the right to human intervention in decisions made solely by AI systems; the right to non-discrimination and to the correction of discriminatory bias; and the right to privacy and the protection of personal data.
Governance, liability and sanctions
Like the AI Act, the draft also regulates governance. Providers and users of AI systems must establish internal structures and processes that ensure the safety of AI systems. More stringent measures apply to high-risk AI, such as conducting an AI impact assessment, which must be made publicly available and, where necessary, repeated at regular intervals.
In addition, the draft obliges providers and users to notify the competent authority of serious security incidents, and it contains rules on civil liability. Sanctions for non-compliance depend on the violation, with maximum fines of up to 50 million Brazilian reais (about 9 million euros) or 2% of a company's turnover.
China
The Chinese State Council published its "Next Generation Artificial Intelligence Development Plan" in 2017, followed in 2021 by ethical guidelines for dealing with AI. Then, in January 2022, China published two sets of provisions addressing specific AI applications. While the provisions on the management of algorithmic recommendations in internet information services (Algorithm Provisions) have been in force since March 2022, the provisions on the management of deep synthesis in internet information services (Draft Deep Synthesis Provisions) are still at the draft stage.
Algorithm Provisions
These provisions address the abuse of algorithmic recommendation systems. To this end, they contain rules on content management, tagging and labelling, transparency, data protection and fair practices. Additional rules apply in certain areas, for example with regard to minors or e-commerce services. Fines of between 10,000 and 100,000 RMB (about 1,570 to 15,705 US dollars) may be imposed for non-compliance.
Draft Deep Synthesis Provisions
These provisions are intended to regulate so-called "deep synthesis" technologies, in particular to combat deep fakes. With the exception of the rules on fair practices, the draft covers all the aspects mentioned above and adds certain obligations for online app store operators. Maximum penalties are the same as under the Algorithm Provisions.
In addition, the Cyberspace Administration of China (CAC) is closing its consultation on the draft Administrative Measures for Generative Artificial Intelligence Services on 10 May 2023. The draft stipulates that new AI products developed in China must undergo a "safety assessment" before being released to the public. Specifically, it requires AI-generated content to be truthful and accurate, and prohibits content that undermines state power or contains terrorist or extremist propaganda, violence, obscene or pornographic information, ethnic hatred, discrimination, or other content that could disrupt economic and social order. AI service providers must take measures to prevent the generation of false information and avoid harmful content; if inappropriate content is generated, they must update their technology within three months to prevent similar content from being generated again. Providers who fail to comply may be fined, have their services suspended, or face criminal investigation.
Finally, China also has regional AI legislation. On 6 September 2022, the Shenzhen government published China's first city-level AI regulation (Regulations on Promoting Artificial Intelligence Industry in Shenzhen Special Economic Zone). Shanghai followed with a provincial law on AI development on 22 September 2022 (Shanghai Regulations on Promoting the Development of the AI Industry), which came into force on 1 October 2022.
Japan
Japan has the second-largest IT sector among the OECD countries and invests heavily in research and development. With regard to AI, its strategies and regulations are closely intertwined with the major "Society 5.0" project, which reflects the ambition to counter social problems (such as the ageing population) through innovation: new technologies are meant to make Japan a highly efficient and inclusive nation, and a leader at the social and political level.
The Social Principles of Human-Centric AI, adopted by the Integrated Innovation Strategy Promotion Council, were published by the Japanese government in March 2019 and set out the basic principles of an AI-ready society. The first part contains seven social principles that society and the state must respect when dealing with AI: (1) human-centricity, (2) education/literacy, (3) data protection, (4) ensuring safety, (5) fair competition, (6) fairness, accountability and transparency, and (7) innovation.
The second part, R&D and utilisation guidelines, is aimed at AI developers and companies. It was elaborated in more detail in the AI Utilisation Guidelines of 9 August 2019 and is intended to serve both as a call to action and as a reference for AI developers and companies drawing up their own guidelines.
In addition, the Governance Guidelines for Implementation of AI Principles (9 July 2021) present action goals and hypothetical implementation examples for AI companies to consider. Drawing on several relevant national and international sets of guidelines, they are intended as a comprehensive tool for developers, service providers and companies in the field of AI. Following a public consultation, the Japanese Ministry of Economy, Trade and Industry has since published an updated version (version 1.1) of these guidelines.
None of these regulatory measures is legally binding. Together, the successive documents reflect the current political consensus on the opportunities and risks of AI. In substance, Japan's approach, oriented towards inclusive growth, sustainable development and societal well-being, is in line with the OECD AI Principles.
Canada
On 16 June 2022, the Canadian federal government introduced Bill C-27, also known as the Digital Charter Implementation Act, 2022. Part 3 of the legislative package contains the Artificial Intelligence and Data Act (AIDA), Canada's first AI law. AIDA aims to regulate international and interprovincial trade in AI systems by requiring certain persons to take measures to mitigate the risks of harm and biased output associated with high-impact AI systems. It provides for public reporting and empowers the Minister to order the disclosure of records relating to AI systems. The Act also prohibits certain practices in the handling of data and AI systems that may cause serious harm to individuals or their interests. Currently (as of March 2023), the Bill is at second reading in the House of Commons and still needs to be approved by the Senate.
India
In India, there is currently no specific regulatory framework for AI systems. However, several working papers published in 2020, 2021 and 2022 by NITI Aayog, the Indian government's public policy think tank, are worth mentioning in this context. While these are still rough drafts, they do indicate the government's intention to move forward with AI regulation. The central proposal is the creation of a supervisory authority that would, among other things, establish and administer principles for responsible AI, issue guidelines and standards, and coordinate the authorities responsible for the various AI sectors.
United States of America
On 4 October 2022, the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights, a framework for the design, use and deployment of automated systems. Unlike the EU's draft AI Act, the Blueprint is non-binding; it lists five principles intended to minimise potential harm from AI systems. On 18 August 2022, the National Institute of Standards and Technology (NIST) published the second draft of its AI Risk Management Framework for comments. The original version dates back to March 2022 and is based on a concept paper from December 2021. The framework is intended to help companies that develop or deploy AI systems to assess and manage the risks associated with these technologies. It consists of voluntary guidelines and recommendations; it is therefore also non-binding and explicitly not to be understood as regulation. More detail can be found in our article AI regulation in the USA – a look across the Atlantic.
Switzerland
Unlike the EU, Switzerland does not plan a standalone law regulating AI; instead, existing laws will be applied and, where necessary, selectively adapted. This includes, for example, supplementing data protection law with transparency requirements for AI systems, and adapting the General Equal Treatment Act, existing competition law, product liability law and general civil law to create the necessary framework for the use of AI systems. Further details can be found in our article AI regulation – will Switzerland be following the EU's lead?.
No global approach
As this snapshot shows, countries are at different stages in developing their approaches to regulating AI and take differing views of how best to do it. Even as some AI developers and tech business leaders call for a six-month moratorium on the development of large language models to allow regulation to catch up, it seems unlikely that consensus will be reached on how best to regulate AI in a way that allows innovation while curtailing potentially harmful uses.