The European Union’s (“EU”) various initiatives to shape the digital future are entering their decisive year: the institutions involved are now putting the finishing touches to all the draft legislation on the regulation of artificial intelligence (“AI”) and data.
On 21 April 2021, the European Commission (“EU Commission”) published the “draft regulation laying down harmonised rules on artificial intelligence” (“AI Act”), which aims to increase society’s trust in AI systems without blocking the opportunities of this technology. The Regulation sets out harmonised rules for the development, placing on the market and use of AI systems in the European Union.
Taking a risk-based approach, AI systems are divided into four risk categories: unacceptable, high, low and minimal. High-risk systems in particular are subject to extensive regulation. AI systems with unacceptable risk, such as systems that manipulate human behaviour or assess the trustworthiness of people based on their social behaviour (so-called social scoring), are banned. High-risk AI systems, such as those that make decisions about people in areas sensitive to fundamental rights, must meet strict requirements for their use. Low- and minimal-risk AI systems, such as chatbots or spam filters, will remain largely unregulated so as to maintain competitiveness in the EU.
Since the first draft from the EU Commission, a lot has happened: the Council of the European Union (“EU Council”) adopted its common position on 6 December 2022 and accepted the compromise proposal prepared under the Czech Presidency. Among other things, the proposed amendments provide for a clarification of the definition of an AI system to ensure sufficiently clear criteria for distinguishing AI from simpler software systems. In Article 5, the scope of prohibited AI practices is expanded: the ban on using AI for social evaluation is now also directed at private entities. In addition, AI systems that are unlikely to cause serious violations of fundamental rights or other significant risks are no longer to be classified as high-risk systems. For SMEs, the upper limit of fines has been halved. The EU Council has also revised the conformity assessment procedure and the provisions on market surveillance to make them more effective and easier to implement.
In the European Parliament (“EU Parliament”), efforts are underway to find a common approach in response to diverging reports and opinions among the committees involved. The points of contention raise some interesting issues, and it remains to be seen just how they will be resolved. For example, it is proposed to exclude from the scope of the AI Act authorities in third countries that use AI in the context of international or judicial cooperation agreements and which are subject to an adequacy decision under the General Data Protection Regulation or an agreement on fundamental rights. The regulation of biometric recognition systems is also highly controversial. Some critics are calling for a comprehensive ban, which would also apply to the subsequent identification of persons as well as to the private sphere.
A vote in a plenary session on the joint IMCO and LIBE report from April 2022 is planned for the first quarter of 2023, but will probably not take place until the sittings in mid-March 2023 (see page 2). Once this has been done, the institutions involved will commence three-way talks to reach agreement. Against the background of the EU Parliament elections taking place in 2024, they will certainly try to conclude the procedure in the course of autumn 2023 at the latest, so that the AI Act can still enter into force during the current legislative period.
In addition to the AI Act, the planned EU Data Act is also of great importance for the use of AI systems. Among other things, it regulates the exchange of data between parties as well as the provision of data and contains provisions on unfair contract terms, interoperability and the switching of data processing services. Through the new rules, the EU legislator wants to contribute significantly to realising the full economic potential of data on the European market. In this area too, however, all kinds of disagreements remain.
Sweden took over the presidency of the EU Council on 1 January 2023 and is trying to broker compromises towards the adoption of a common position. Differences of opinion on the EU Commission’s proposal of 23 February 2022 on harmonised rules for fair data access and use appear to concern, for example, the exemptions for small and medium-sized enterprises (so-called SMEs), and whether the nullity of unilaterally imposed unfair contract terms should apply regardless of the size of the company concerned. The demarcation from the Trade Secrets Directive is also disputed; some Member States are arguing for a reduction of disclosure obligations in order to protect the trade secrets affected by them.
The recent proposal for a “directive on adapting non-contractual civil liability rules to artificial intelligence” dated 28 September 2022 and the related proposal for a revision of the 1985 Product Liability Directive aim to make manufacturers of digital systems more accountable. The background to this is, among other things, that users’ control over the products concerned is declining. According to the proposal, two rules are to be introduced into the Member States’ laws on non-contractual fault-based civil liability, under which a breach of the duty of care and its causal link to the damage are presumed. The actual basis for liability, on the other hand, continues to be derived from national law.
Voting on this has begun in the EU Council and the EU Parliament. For example, the Working Party on Civil Law Matters of the EU Council dealt with the proposal for a directive at its meeting on 12 January 2023.
Staying tuned is therefore indispensable in 2023, because this is the year when key decisions will be made on the legal foundations of the digital future.