14 June 2022
Digital Health – 4 of 8 Insights
AI is difficult to define. One reason is that we do not yet have a solid concept of what intelligence is, let alone which procedures we want to call intelligent. Even experts have differing views on what exactly AI is, given that 'AI' is often used as an umbrella term for a multitude of technologies, each with its own characteristics and modes of operation. Currently, machine learning is the engineering field that underlies most recent progress in AI.
This uncertainty leaves companies in the field asking a basic question: is the AI they own actually AI? The question will become increasingly relevant as the law catches up with technology and provisions are made specifically relating to AI. Areas of law in which we anticipate this occurring are: patent law, in particular around inventorship where the "invention" is made by an AI machine; regulation of medical devices, where the black-box nature of AI can be seen as problematic by regulators who want both to understand the technology they are assessing and to ensure reproducibility, which is key to consistency in safety and efficacy; and data privacy, in particular where consent has to be given in a manner that is specific – what if the AI develops such that the consent is no longer broad enough to cover the use made of the personal (often sensitive) data?
The upsurge in AI development has led both the EU and the UK to develop AI strategies, including in relation to developing legal structures for regulating its deployment.
Understanding what AI is will be the first step to determining whether these anticipated legal developments will be applicable. This article discusses definitions, but we anticipate that governments will eventually draft their own definitions when writing the new legislation governing AI deployment.
To give a flavour of the range of definitions of AI available to choose from, a few are listed below:
"It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable” – John McCarthy
“Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviours and actions that help the entity achieve a particular goal or objective over a period of time.” – EMERJ (AI Research and Advisory Company)
"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." – (brittannica.com)
Conceptually, these definitions might aid a company in determining whether its software is AI. But they are open to interpretation and therefore do not provide certainty.
Companies looking for a more certain definition of AI might want to turn to legal definitions. A clear definition of AI is important for policy makers, who must have one if they are to regulate it. For example, if someone is knocked down by a driverless car, it should be clear who is liable. Businesses need to know who owns the IP if their AI designs products.
However, legal definitions differ from more general definitions. Legal definitions are working definitions: courts must be able to determine precisely whether a system is considered AI by the law. In addition, as technology is constantly evolving, legal definitions should also capture future changes in the AI field.
There have been several historical attempts to pin down a legal definition of AI.
In 2018, the UK Parliament defined AI as 'technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation'. The UK Government's National AI Strategy, published on 22 September 2021, stated that 'Artificial Intelligence' as a term can mean a lot of things, that the government recognises no single definition will be suitable for every scenario, and that in general the following definition is sufficient for its purposes: "Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks."
In the US, the Future of AI Act – which was intended to set up the Federal Advisory Committee on AI, but which was never enacted – defined AI as ‘Any artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance… In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.’
These definitions define AI in relation to human intelligence, which is an issue because human intelligence itself is difficult to define. Michael I. Jordan, a leading researcher in AI and machine learning at the University of California, Berkeley, has noted that the imitation of human thinking is not the sole goal of machine learning, which can instead serve to enhance human intelligence. He says: "People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans. We don't have that, but people are talking as if we do."
In 2019, the UK Office for AI released a different definition of AI: "the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence."
This definition raises the question of what happens in the future, when some tasks are no longer "commonly thought" to require intelligence.
Note that the UK government has also set out a legal definition of AI in the National Security and Investment Act:
"A qualifying entity carrying on activities for the purposes set out in paragraph (2), which include— (a) research into artificial intelligence; or (b) developing or producing goods, software or technology that use artificial intelligence. 2. The purposes are— (a) the identification or tracking of objects, people or events; (b) advanced robotics; (c) cyber security.
“artificial intelligence” means technology enabling the programming or training of a device or software to— (i) perceive environments through the use of data; (ii) interpret data using automated processing designed to approximate cognitive abilities; (iii) make recommendations, predictions or decisions; with a view to achieving a specific objective."
This definition of AI is narrowed to focus on three higher-risk applications. This makes sense given the purpose of the definition: to give companies clarity on whether mandatory notification under the Act is required. However, it also means that the definition will fail to capture all AI and is therefore of limited applicability to other sectors, where the use of AI does not pose a risk to national security.
In early 2021, the EU Commission proposed the first ever regulatory framework for AI, which we discussed in an earlier article. The draft AI Regulation includes a proposed set of rules meant to provide clear requirements and obligations regarding specific uses of AI. The EU Commission attempted to take into account the fast-evolving nature of AI, promising that the act would provide "a single future-proof definition of AI."
The draft Regulation defines AI as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."
Annex I lists machine learning approaches; logic- and knowledge-based approaches; and statistical approaches. The EU proposal states that, by referring to Annex I, the Commission will be able to adapt the list to new technological developments as the market evolves. Unlike the UK and US definitions discussed above, this definition side-steps any reference to "human intelligence" and attempts to cover future developments.
However, this definition is very broad: the question arises whether a general law such as the one proposed by the Regulation will be so wide that it becomes ineffective. It remains to be seen how effective the definition will be in capturing all AI, especially in relation to future technologies not yet invented and therefore not yet envisioned in the list. The contents of Annex I will likely be scrutinised during the consultation phase.
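To see how little software is needed to fall within this wording, consider a minimal sketch (our own illustration, written in Python with scikit-learn; the data and variable names are invented, not drawn from the Regulation): a few lines of routine statistical modelling arguably already amount to "software developed with a machine learning approach" that "generates predictions" for a human-defined objective.

```python
# A deliberately trivial system that arguably meets the draft EU definition:
# software developed with a machine learning approach (Annex I) that, for a
# human-defined objective, generates predictions influencing its environment.
# All names and data here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours a component has run vs. whether it later failed.
hours_run = [[10], [200], [350], [500], [650], [800]]
failed = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(hours_run, failed)

# The "output": a prediction that a maintenance scheduler might act on.
prediction = model.predict([[420]])
print(prediction)  # 0 or 1, depending on the fitted decision boundary
```

Read literally, each limb of the definition is satisfied by software this trivial, which is precisely why the question of over-inclusiveness arises.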
The rationale for having a legal definition has been discussed above, but coming up with a catch-all definition of AI has proven difficult.
A better approach might be to look at the contexts surrounding AI in which the law might need to intervene. As Turner (2019) put it, policy makers should not ask 'what is AI?' but 'why do we need to define AI at all?' According to Casey and Lemley (2019): 'We don't need rules that decide whether a car with certain autonomous features is or is not a robot. What we actually need are rules that regulate unsafe driving behaviour.' There is a question whether regulation of AI should be as broad as the definition put forward by the draft EU Regulation, or narrower in scope, considering sectors on a case-by-case basis.
This is because the implications of AI in, for example, the finance sector will be vastly different to those in the healthcare sector, where there may be life-and-death consequences if it is not regulated properly – going back to the driverless car example, liability for AI medical devices must be clear. The draft Regulation touches on this issue: a medical device that incorporates AI is considered a high-risk system.
A strong regulatory framework that considers the special characteristics of AI is essential to ensuring the safety and security of a medical device that incorporates AI. Medical devices are currently covered in the UK by the Medical Devices Regulations 2002 and in the EU by the Medical Devices Regulation 2017/745, and it is important to consider enforcement under the existing framework as well, as some risks are inherent to all medical devices. However, medical devices with AI are a special type of medical device, with unique risks that must be considered. For example, as considered in the MHRA medical device consultation document, future regulation might require that AI medical devices used for diagnostics be monitored for scientific validity, to ensure that the output they actually provide correlates with the output they would be expected to provide. This is monitoring that other medical devices might not need.
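Purely as our own sketch – the threshold, window size and function names below are assumptions, not anything prescribed by the MHRA or the consultation document – such scientific-validity monitoring might amount to checking a rolling window of device outputs against subsequently confirmed outcomes:

```python
# A hypothetical sketch of post-market "scientific validity" monitoring for
# an AI diagnostic device: track how often recent outputs match confirmed
# outcomes and flag when agreement falls below the performance validated at
# approval. The threshold and window size are invented for illustration.
from collections import deque

VALIDATED_FLOOR = 0.90   # accuracy assumed to have been validated at approval
WINDOW = 500             # number of recent confirmed cases to monitor

recent_matches = deque(maxlen=WINDOW)

def record_case(prediction: int, confirmed_outcome: int) -> None:
    """Log whether the device's output matched the later-confirmed diagnosis."""
    recent_matches.append(prediction == confirmed_outcome)

def output_remains_valid() -> bool:
    """True while rolling agreement still meets the validated floor."""
    if len(recent_matches) < WINDOW:
        return True  # too few confirmed cases yet to assess drift
    return sum(recent_matches) / len(recent_matches) >= VALIDATED_FLOOR
```

Monitoring of this kind would sit on top of, rather than replace, the requirements that apply to conventional devices.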
Therefore, the two regimes of AI regulation and medical device regulation must work together, to ensure that regulating medical devices with AI separately from other medical devices does not lead to such a divergence that enforcement becomes unclear, gaps form in the regime, or enforcement is duplicated. These risks are not currently addressed by the broad EU definition of AI. While the intention of that definition is to finally define AI in a way that is accepted globally, industry-specific guidance on the implementation of the Regulation will be needed to provide a clearer position on how the definition will affect the regulation of medical devices incorporating AI.
Companies developing AI applications in healthcare, whether medical devices or other uses, will need to keep abreast of these shifting regulatory regimes, which are likely to set down concrete definitions into which newly developed products might or might not fit.
by Alison Dennis