17 October 2022
Ever since the first talks of a regulation harmonising artificial intelligence law began to circulate, entrepreneurs have been asking more and more questions about its requirements: Are my products affected? Which rules must be followed? Who might be liable in case of defects or damages? We have briefly summarised the current situation in an effort to address the most frequently asked questions and provide an outlook on what is yet to come or can be expected.
The European Commission's proposal for the regulation on artificial intelligence (Artificial Intelligence Act – “Regulation”) has a broad scope of application. The Artificial Intelligence Act applies to all providers that place AI systems on the market or put them into operation in the European Union, to providers and users whose AI systems' output is used in the European Union, and to users of AI systems in the European Union. AI systems in the context of the Regulation include, but are not limited to, systems based on machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.
The Regulation classifies AI systems into risk categories, with regulatory requirements varying in rigour depending on the category. AI systems that pose an unacceptable risk will be banned outright. These include systems that can harm people through subliminal influence, as well as those that classify people and treat them differently according to their personality or social behaviour (e.g., social scoring). High-risk systems may only be used under strict compliance requirements. This applies, for example, to systems used for remote biometric identification, securing critical infrastructure, decision-making in human resources management, creditworthiness assessment, and risk assessment in criminal prosecution. Systems that pose little or minimal risk, such as chatbots, AI-assisted face generation or AI in video games, need only meet certain transparency requirements.
The aim of the European Commission's new proposal for a directive on AI liability is to shift the difficult evidentiary burden regarding fault in connection with damage caused by AI systems away from the consumer and towards the AI provider or operator.
Hence, there are plans to oblige operators of AI systems to disclose relevant evidence upon a plaintiff’s request where a high-risk system is suspected of having caused damage. If such a request is not complied with even after a court order, the burden of proof is reversed: it will then be rebuttably presumed that the operator breached their duty of care.
A reversal of the burden of proof with regard to liability also applies if, for example, the relevant security requirements were not adhered to during the development and operation of the AI system, or if the system was trained with data sets that did not meet the required quality standards.
These rules make evidence gathering easier for consumers: they can apply to a court for disclosure of relevant records, and the court can then order the operator to produce them. At the same time, appropriate safeguards for the protection of sensitive information and trade secrets are to be provided.
Such a far-reaching regulation will naturally be scrutinised and criticised from a wide variety of angles.
One major issue concerns legal certainty: some stakeholders believe that the definition of AI systems is too narrow and does not sufficiently address potential violations of human rights, while others view an overly broad regulation as a factor inhibiting technical and economic growth. It has therefore become necessary to define AI systems more clearly for the purposes of the new regulation.
The EU's approach of choosing the technical mode of functioning of AI systems as the common denominator has also drawn criticism: the drafts do not sufficiently account for the fact that an AI system's technical functioning is ultimately determined by human choices, and that biases are therefore always man-made.
The new liability regime also raises other questions: for example, what other means of proof can be used if an operator cannot disclose the decision logic of an AI system for technical reasons?
The Austrian federal government is pursuing three strategic goals in its Artificial Intelligence Mission Austria 2030: the use of AI oriented towards the common good, the development of Austria as a research location, and securing Austria’s status as a competitive business location.
From a legal perspective, the government plans to:
However, there are no concrete legislative proposals yet, as the federal government must first await the finalisation of the EU rules before taking further steps.
It will most likely take some more time before the full set of AI regulations comes into force and the directives are transposed into national law.
The next step is to weigh up and harmonise the varied viewpoints of the different stakeholders. Positioning the rules between economic efficiency, social responsibility, consumer protection and technological progress is a tremendous balancing act with far-reaching implications for the EU and its future AI innovations. Much further thought is needed, particularly when it comes to concretising the legal requirements.
Written with the support of Alexander Lakatha