Authors

Fritz-Ulli Pieper, LL.M.

Salaried Partner

Dr. Benedikt Kohn, CIPP/E

Senior Associate

February 22, 2023

AI regulation in the USA – a look across the Atlantic

  • Briefing

Co-authors: Dean W. Harvey, Partner, Perkins Coie LLP, and Pranav Neel Bethala, Associate, Perkins Coie LLP


In the European Union (“EU”), the legislative process on the Artificial Intelligence Act (“AI Act”) is steadily moving forward following the Council's position published on December 6, 2022. However, even though the upcoming Trilogue is eagerly awaited, it is no longer expected this year because of the European Parliament's difficulties in agreeing on a position. This seems like a good time to pause and look across the Atlantic at what the United States of America ("USA") is doing in terms of AI regulation. To that end, we take a closer look at three recent developments in the USA's regulatory landscape.


CPRA - California Privacy Rights Act

In November 2020, voters in the U.S. state of California approved a new law known as the California Privacy Rights Act (the “CPRA”). Effective as of January 1, 2023, the CPRA significantly amends and expands an existing consumer-privacy law, the California Consumer Privacy Act (the “CCPA”). The changes enacted by the CPRA include the following:

  • The creation of a new California government agency to enforce the CPRA;
  • The expansion of an existing opt-out right so that consumers can opt out of having their personal information shared for the purpose of “cross-context behavioral advertising”, a practice in which consumers are targeted with advertising based on personal information obtained from their activity outside of the context in which they intentionally interact with a business; and
  • The exclusion of “dark patterns” – user interfaces designed or manipulated with the substantial effect of undermining user autonomy, decision making, or choice – from constituting “consent” by consumers regarding their personal information, together with a requirement for regulations prohibiting businesses from using dark patterns.

Although the CPRA is a California law, many important commercial transactions and relationships involve California in some fashion and may therefore be subject to the CPRA. Moreover, much as occurred with the CCPA, several other states are likely to adopt laws similar to the CPRA, making the CPRA nationally significant even though it does not apply throughout the entire USA.


AI Bill of Rights

On October 4, 2022, the White House Office of Science and Technology Policy (the “OSTP”) published a Blueprint for an AI Bill of Rights (the “Blueprint”). The Blueprint's origins date back almost a year earlier, to October 22, 2021, when the OSTP issued a press release acknowledging potential and actual dangers posed by AI systems and proposing that, much like the Bill of Rights enacted during the American Founding, a new bill of rights for citizens was necessary with respect to AI. The announcement included a public request for information about AI-enabled technologies from public- and private-sector researchers, policymakers, stakeholders, technologists, journalists, and advocates. In a subsequent press release on November 10, 2021, the OSTP announced that it would also host listening sessions and public events bringing together various practitioners, advocates, and government officials to promote education and engagement on areas where AI-enabled technologies affect the lives of citizens. The Blueprint is the culmination of those efforts and represents the current White House's approach toward AI.

Unlike the EU's planned AI Act, the Blueprint is non-binding, but it lists, and provides practical guidance for implementing, five principles intended to minimize potential harm from AI systems:

  • Safe and effective systems

AI systems should be developed with public and expert consultation to identify potential risks. They should be tested prior to deployment and monitored on an ongoing basis to demonstrate that they are safe and effective. AI systems should not be developed with the intent or foreseeable possibility of compromising safety. They should be designed to proactively protect against harm that could result from unintended consequences. The use of inappropriate, low-quality, or irrelevant data should be avoided. AI systems should be subject to independent assessments and reports.

  • Protection against algorithmic discrimination

AI systems should be developed and used in an equitable manner and should not discriminate on the basis of a legally protected characteristic. AI system developers and operators should take proactive and ongoing steps to protect individuals and communities from algorithmic discrimination and to design and use systems in an equitable manner. Systems should be subject to proactive equity assessments and be developed based on a representative and robust data set. They should ensure accessibility for people with disabilities and prevent the use of unrepresentative data that contributes to discrimination. There should be an independent assessment of potential algorithmic discrimination, with reporting that is as public as possible.

  • Privacy

Individuals should be able to determine how their data is used and should be protected from unchecked surveillance. To this end, AI systems should process data in accordance with data protection principles (e.g., data minimization, consent to processing, deletion of data). Systems should not use AI to make design decisions that obfuscate user choice or burden users with default settings that intrude on privacy. Surveillance and monitoring systems should be subject to enhanced oversight, including an assessment of potential harms, and should not be used in areas such as housing, education, or employment, or where surveillance would monitor the exercise of democratic rights in a way that restricts civil rights and liberties.

  • Notices and explanations 

Designers, developers, and operators of automated systems should provide generally accessible, easily understood documentation. This should include clear descriptions of the general system functionality and the role of automation, a reference to the use of such systems, the person or organization responsible for the system, and clear, timely, and accessible explanations of the results. Individuals should know how and why a result affecting them was determined by an automated system. Automated systems should provide meaningful explanations appropriate to the risk.

  • Human alternatives, consideration, and fallback

There should be an option to opt out of AI systems in favor of a human alternative, as well as access to timely human review and remediation through a fallback and escalation process. AI systems used in sensitive areas (e.g., criminal justice, labor, education, and health) should additionally be tailored to the purpose, provide meaningful access for monitoring, include training for all individuals interacting with the system, and incorporate human consideration for adverse or high-risk decisions.

Unsurprisingly, reactions to the document have been mixed. Some criticize the fact that it is only a non-binding white paper and not a legal regulation, and that it therefore offers no way to actually enforce the principles it describes. Others criticize the draft for denigrating digital technologies as "one of the great challenges to democracy" and worry about the impact of possible new regulations on the competitiveness of industry – a position that was essentially also put forward against the first draft of the AI Act in April 2021. Some even hope that the EU and the USA could create a uniform set of rules for AI regulation – in view of the completely different approaches taken by the two regulators to date, however, this will probably remain a pipe dream for the time being.

Time may prove that some of these criticisms are not that problematic. For example, the criticism that the Blueprint is non-binding neglects the fact that the OSTP’s final product was never anticipated to be binding. Additionally, the OSTP’s Blueprint provides cases where government agencies have implemented its principles. For example, the Department of Energy, the Department of Defense, and the United States Intelligence Community have created frameworks for ethical AI development and use, and the Equal Employment Opportunity Commission and the Department of Justice have issued practices for avoiding discrimination in hiring or against employees with disabilities. By including such cases, the Blueprint sets forth examples that other federal agencies can follow in creating more binding regulations and guidelines.

However, apart from the criticisms noted above, there may be a deeper problem in the Blueprint’s approach: it appears to simultaneously over- and underregulate in important areas. Regarding potential over-regulation, the Blueprint states that it is intended to cover any automated systems “that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access”. The Blueprint understands this scope quite broadly, even extending it, as the Blueprint’s appendix notes, to AI uses like “algorithms that purport to detect student cheating or plagiarism” and “automated traffic control systems”. Although such uses may meaningfully impact individual or community well-being, such impacts are likely to be more attenuated and less likely overall, especially when compared to the other and more serious impacts noted in the Blueprint. Uncritically applying the Blueprint’s principles to these types of AI uses not only may divert resources from addressing potentially high-risk and impactful automated systems but also may be counterproductive (e.g., human alternatives to AI-based plagiarism detection are far less effective).

Regarding potential under-regulation, it has been observed that the Blueprint lacks much regulation for – or even discussion of – the extensive use of AI by federal law enforcement agencies. The Blueprint expressly states that law enforcement activities “require a balancing of equities, for example, between the protection of sensitive law enforcement information and the principle of notice” and that “as such, notice may not be appropriate or need to be adjusted”. Citizens may reasonably be concerned about this approach, especially when it is contrasted with the Blueprint’s much more demanding and meticulous approach for other AI uses, including activities with lower risk.


AI Risk Management Framework

On August 18, 2022, the National Institute of Standards and Technology ("NIST") released the second draft of its "AI Risk Management Framework" for comment. The original version dates back to March 2022 and is based on a concept paper from December 2021; the final version is announced for January 2023. The AI Risk Management Framework is intended to help companies that develop or deploy AI systems assess and manage the risks associated with these technologies. It consists of voluntary guidelines and recommendations, so it is also non-binding and is explicitly not to be understood as a regulation.

The AI Risk Management Framework consists of four core functions, each of which is subdivided into subcategories, which in turn are assigned actors and activities.

  • "Map": The context is recognized and the risks associated with the context are identified.
  • "Measure": Identified risks are assessed, analyzed, or monitored.
  • "Manage": Risks are prioritized and managed based on likely impact.
  • "Govern": A culture of risk management is cultivated and present.

Users of the framework can apply these functions in whatever way best suits their AI risk management needs.

The AI Risk Management Framework has been praised as a working framework that organizations can actually use and adapt to their particular circumstances. Nevertheless, some sources of risk are not addressed, or not sufficiently addressed, in the framework, such as poor data quality or unpredictable interactions between AI and other systems. Additionally, some comments on the second draft criticize the framework for not adequately addressing the human components of AI risk management. For example, several commenters have expressed concerns that the framework's "human in the loop" concept fails to distinguish the different kinds of human oversight appropriate for automated systems, that the framework does not sufficiently discuss the importance of feedback from end users, or that the framework could be improved by stating and identifying design principles to improve dialogue and collaboration between interested parties, including human end users. It remains to be seen whether these concerns are more fully addressed in the final version of the framework.


Different approaches – same goal

There is no doubt that the potential risks of AI have been recognized – on both sides of the Atlantic. However, as in the case of the General Data Protection Regulation, the EU seems to be bringing the bigger regulatory stick, while the USA is (for now) relying more on voluntary action. It remains to be seen which approach will be more successful in achieving the goal of using AI on the basis of Western values without blocking the technology's undoubtedly great opportunities in the face of international competition. In any case, we still need to wait and see what the final version of the AI Act will look like and whether the USA will catch up in terms of regulation. Neither is likely to be known before the end of 2023, but one thing is certain – we will keep an eye on it.

Co-authors

Dean W. Harvey

Dean W. Harvey is a partner at Perkins Coie LLP and co-chair of its Artificial Intelligence, Machine Learning & Robotics practice. Dean has counseled clients ranging from Fortune 100 entities to start-ups on artificial intelligence (AI) and machine learning, privacy and security, and has counseled AI companies in negotiations with their clients. Additionally, Dean has more than a decade of software industry experience in AI and numerous IT systems prior to his practice of law, which brings technical proficiency to his legal practice. Dean is recognized by Chambers USA, The US Legal 500 and The Best Lawyers in America in the areas of technology law and technology outsourcing.


Pranav Neel Bethala 

Pranav Neel Bethala is an associate at Perkins Coie LLP who works with clients in a variety of industries, including artificial intelligence (AI). Pranav has advised and assisted clients on compliance and policy issues concerning AI, including drafting comments and responses on behalf of a household-name client for the White House's AI Bill of Rights to educate and protect American citizens with respect to AI.


Artificial Intelligence Act

Read frequently asked questions, commentary and the latest updates concerning the EU's AI Act. 

Learn more


Related Insights

AI regulation – will Switzerland be following the EU's lead?

December 27, 2021
Briefing

By Dr. Benedikt Kohn, CIPP/E

Learn more