4 November 2024
Our quarterly AI newsletter provides analysis of key recent industry, legal and sector developments in AI with a focus on commercial technology, digital and data in the EU and UK.
Debbie Heywood looks at UK policy announcements and rumours around AI legislation since the July 2024 general election.
New AI legislation was announced by the government in the King's Speech of 17 July 2024. The aim is to "seek to establish the most appropriate legislation to place requirements on those working to develop the most powerful AI models". Curiously, the background briefing notes to the speech did not elaborate on what the legislation might cover, although it seems clear that any proposal would be far less comprehensive than the EU's AI Act, and there is widespread agreement that it will focus on the safety of frontier systems.
At the Labour Party conference in September 2024, AI Minister Feryal Clark hinted the legislation could well go further, saying she was "in the process of bringing forward legislation" intended to clarify the use of copyrighted materials to train AI, and suggesting a consultation would take place as early as October. She has since clarified her remarks, saying instead that the government is conducting a series of round tables with stakeholders to try to resolve copyright disputes between British AI companies and creatives. Speaking at The Times Tech Summit, Clark suggested an agreement could come by the end of the year and that it might take the form of an amendment to existing laws or entirely new legislation. Transparency and the right to opt out of having copyrighted materials used to train AI models are expected to be a focus of the discussions, but there has also been talk of introducing an extended TDM (text and data mining) exemption, similar to the one in the EU Copyright Directive, to cover TDM for commercial purposes under certain circumstances, an initiative previously rejected by the then UK government in 2023 (see here for more on this issue). Another area in which there are mixed messages is whether or not the AI Office will be put on a statutory footing.
Whatever the AI legislation contains, it will be a departure from the previous government's policy as set out in its White Paper on AI, published in March 2023, which concluded there was no need for AI-specific legislation. Just before the 2024 general election, however, there were rumours that the Conservative government was working on AI legislation, widely expected to make mandatory the currently voluntary commitments by leading developers of large language models/general purpose AI to submit algorithms to a safety assessment process. There were also suggestions at the time that copyright legislation would be amended to allow organisations and individuals to opt out of having their content scraped by LLMs.
It initially seemed likely that any planned legislation would not cover the public sector, which may explain why Lord Clement-Jones introduced a Private Member's Bill on AI in the House of Lords on 9 September. The Bill is aimed at mitigating the risks of AI use by public authorities, with a focus on potential bias and automated decision-making. It would require public authorities to take certain protective measures, including around impact assessments, transparency, log maintenance and retention, and explainability. It would also provide for an independent dispute resolution mechanism for allegedly unfair or disputed automated decisions. The Ada Lovelace Institute said in September that local authorities are struggling to navigate the 16 pieces of legislation and guidance covering the use of AI in public procurement, so they might indeed welcome legislation in this space, and lately there have been suggestions that the public sector could be in scope of the upcoming legislative proposal.
On 15 October 2024, the UK government published a Green Paper, Invest 2035: a Modern Industrial Strategy, for consultation. As you might expect, AI is mentioned several times, mostly as an opportunity to strengthen the UK's position in sectors such as life sciences, digital and technologies, data-driven businesses and defence. The Strategy also refers to the AI Opportunities Action Plan, led by Matt Clifford and launched in July 2024, which will propose an "ambitious plan to grow the AI sector and drive responsible adoption across the economy". The government is widely expected to publish its AI Plan in November, potentially alongside a consultation on new legislation.
ECIJA's Carlos Rivadulla Oliva looks at the EU's progress on regulating AI and at how to prepare for compliance, covering the AI Act, the AI Pact and the AI Liability Directive.
With the conclusion of the EU's AI Act, which came into force on 1 August 2024, the EU is at the forefront of regulating artificial intelligence. Businesses operating in the EU must brace themselves for the gradual implementation of the requirements and obligations under the AI Act, which will apply to a greater or lesser degree to all operators in the AI value chain.
Central to the preparation process is the EU AI Pact, also announced on 1 August 2024. This is a non-legislative, voluntary commitment by companies to comply with the principles and future obligations of the AI Act ahead of its provisions becoming applicable. The Pact serves both as a soft landing allowing businesses to test compliance and as a political move to engage stakeholders early.
The EU AI Pact is significant because it allows businesses to get ahead of the compliance curve. It emphasises collaboration between the public and private sectors to address the risks posed by AI technologies. Signatories commit to the ethical use of AI, focusing on ensuring that AI systems are lawful, transparent and accountable, reflecting the risk-based approach of the AI Act. Although voluntary, participating in the AI Pact sends a strong message of corporate responsibility and readiness for the incoming obligations under the AI Act. On 25 September 2024, the European Commission announced that over 100 companies had signed up, including Amazon, Google and Microsoft.
AI transparency as a key compliance priority
Among the many obligations that companies will face under the AI Act, one stands out as particularly critical: AI transparency. The AI Act divides AI systems into categories based on their risk profiles, with “high-risk” systems subject to the strictest requirements. One of these is the demand for transparency, which means that operators of high-risk AI systems must provide clear information about how their systems function and make decisions.
Transparency is essential for building trust in AI systems and ensuring accountability. The transparency requirements under the AI Act are multifaceted. First, users must be informed when they are interacting with an AI system rather than a human, especially in cases involving automated decision-making. Second, companies must be able to explain, in layperson’s terms, how the AI system operates, particularly how it processes data and arrives at specific outcomes.
The complexity of many AI systems poses a challenge, particularly in the context of advanced machine learning models like neural networks. Organisations must prioritise not only understanding the technical workings of their AI but also translating these mechanisms into clear and comprehensible terms for regulators, users, and stakeholders. Compliance with transparency requirements will also likely involve documentation and regular audits of AI systems to ensure they are functioning as intended and are aligned with the principles of fairness, accountability, and non-discrimination.
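The AI Act does not prescribe a single format for this documentation. Purely as an illustrative sketch (the system name and fields below are hypothetical, not drawn from the Act), an operator of a high-risk system might keep structured transparency records along the following lines:

```python
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    """One entry in an AI system's transparency documentation."""
    system_name: str
    intended_purpose: str        # what the system is for, in plain language
    plain_language_logic: str    # how inputs lead to outputs, for laypeople
    data_categories: list[str]   # categories of data the system processes
    human_oversight: str         # who can review or override decisions
    last_audit: str              # date of the most recent audit

# Hypothetical example for an imaginary credit-scoring system
record = TransparencyRecord(
    system_name="LoanRiskScorer",
    intended_purpose="Rank consumer loan applications by repayment risk.",
    plain_language_logic=(
        "The system compares an application's income, existing debt and "
        "payment history against patterns learned from past loans and "
        "produces a risk score between 0 and 100."
    ),
    data_categories=["income", "existing debt", "payment history"],
    human_oversight="A credit officer reviews every score above 70.",
    last_audit="2024-09-30",
)
```

Whatever form such records take, the point is the same: the plain-language account of the system's logic, the data it relies on and the human oversight in place must be kept current and auditable.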
Update on progress of the AI Liability Directive
In tandem with the AI Act, the AI Liability Directive (AILD) is intended to play a crucial role in harmonising the legal landscape for AI across the EU. The AILD is designed to establish clear rules regarding liability for damage caused by AI systems. It focuses on facilitating claims for those harmed by AI, making it easier to prove causality and liability in cases involving complex AI systems.
The European Commission proposed the AILD in September 2022, but the text has yet to be agreed by the European Parliament and the Council. Progress has stalled, and the current version may yet be significantly amended or withdrawn altogether.
The European Parliament's JURI committee is expected to decide shortly whether to proceed with the Directive as it stands, following an impact assessment by the European Parliamentary Research Service (EPRS), published in September 2024, which called for changes amid concerns that the AILD overlaps too much with the AI Act and the recently agreed revised Product Liability Directive. The EPRS's recommendations include that the legislation should be a Regulation rather than a Directive, that the focus should shift towards software liability with the scope extended to non-AI software in order to align with the revised Product Liability Directive, and that certain areas of liability and damages claims should be extended.
As the legislative landscape continues to evolve, organisations must stay agile and informed, actively preparing for both the AI Act and, potentially, the AI Liability Directive, to mitigate risks and capitalise on the benefits of compliant AI innovation.
Benjamin Znaty looks at what's really behind the current trend of delaying AI product releases in the EU.
Several major tech companies have recently postponed the release of new AI features and services in the EU. In almost all cases, the press has cited the legal challenges these companies face in ensuring compliance with the latest EU regulations before launching their AI innovations. But could there be more strategic reasons at play?
According to an article published by The Verge, Apple's decision to delay the release of its 'Apple Intelligence' AI features in France and across the EU was attributed to "regulatory uncertainties" stemming from the Digital Markets Act (DMA). These AI capabilities will be rolled out gradually worldwide, with EU countries among the last to gain access. Apple reportedly had concerns about the DMA's interoperability requirements, which could force the company to open up its ecosystem. While Apple is said to be working with the European Commission to ensure these features are introduced without compromising user safety, the actual link between delaying the launch of Apple Intelligence in Europe and addressing those concerns remains unclear.
This decision to delay the launch of AI capabilities in the EU is by no means unprecedented. In early October 2024, OpenAI introduced its highly anticipated 'ChatGPT Advanced Voice Mode' in the UK but chose not to release it in EU countries. Reports indicate that OpenAI attributed this decision to the need to comply with EU regulations, specifically the EU AI Act. The press highlighted Article 5 of the EU AI Act, which prohibits the use of AI systems to infer emotions. However, that prohibition only applies to the use of this type of AI within "areas of workplace and educational institutions", leaving the connection between Article 5 and the new ChatGPT feature somewhat ambiguous. Perhaps for this reason, in a tweet of 22 October, OpenAI finally announced its decision to roll out the feature across the EU.
The GDPR is also regularly cited as a potential stumbling block to AI development in the EU. In June 2024, Meta announced at its developer conference that upgrades to its Llama AI product would not be available in Europe for the time being. In a public statement, Meta explicitly stated that the delay related to GDPR compliance issues, particularly in light of scrutiny from the Irish Data Protection Commission (DPC). According to Meta, requests made by the DPC hindered the training of its large language model, which relies on public content shared on Facebook and Instagram. While Meta has made the pause in its use of EU data to train its AI model permanent, it has resumed these processing activities in the UK, where the ICO continues to maintain a watching brief but has not so far required Meta to cease the processing.
This was not the first time Meta has run into regulatory scrutiny over its use of AI. Three years ago, it announced it would cease using facial recognition technology for tagging purposes on Facebook in light of privacy concerns. On 21 October 2024, however, it said it was planning to start using facial recognition again to verify user identity, help recover hacked accounts, and detect and block some types of scam ads. Interestingly, Meta said it would not be testing facial recognition for identity verification purposes in the EU, the UK, or the US states of Texas and Illinois, jurisdictions in which it is continuing to have conversations with regulators. Meta's vice president for content policy is reported to have said that the "European regulatory environment can sometimes slow down the launch of safety and integrity tools like this. We want to make sure we get it right in those jurisdictions".
Whichever EU regulatory framework is cited in the above cases (the DMA for Apple, the AI Act for OpenAI, or the GDPR for Meta), the outcome is that EU consumers may experience short-term delays in accessing innovative AI technologies. Looking at the longer-term prospects, though, these regulatory frameworks arguably present an opportunity for tech businesses. While it's true that businesses may need to postpone releases of new AI technologies and features, as Meta has indicated, these organisations will be working to ensure that their products meet EU regulatory requirements while also preserving their commitment to user privacy and data security in a complex regulatory landscape. Creating customer trust will be fundamental to take-up, so taking the time to get it right may actually increase profitability which, in turn, will further fund innovation.
Whether or not the EU's approach to regulation leads to enhanced consumer protection at the expense of technological progress in Europe is yet to be determined, but it's important to recognise the ongoing interaction between big tech corporate strategies and regulatory oversight when launching AI capabilities in Europe.
Gregor Schmid looks at the implications of the Hamburg Court's decision on the Text and Data Mining copyright exemption's applicability to training generative AI in the EU.
In a decision of 27 September 2024, the Hamburg Regional Court dismissed the lawsuit of a photographer against LAION, the provider of the LAION-5B image-text dataset. The main reasons for the decision are based on the copyright exception for Text and Data Mining (TDM) for purposes of scientific research, but the decision also addresses a number of other issues, such as the applicability of the TDM exceptions to the training of generative Artificial Intelligence, the requirements for declaring a reservation of rights according to the general TDM exception, and the conditions of “machine readability”. The decision has recently been appealed and the case will now be heard by the Hamburg Higher Regional Court.
The facts
LAION offers the LAION-5B image-text dataset, which can be used to train large image-text models such as Stable Diffusion. The plaintiff (a stock photographer) claimed that LAION unlawfully downloaded a photograph he had created for the purposes of creating AI training datasets, and demanded a cease and desist order against the allegedly unlawful download. The dataset contains hyperlinks to publicly accessible images or image files on the internet, together with further information about the corresponding images, including a text description of each image's content. The dataset comprises 5.85 billion such image-text pairs. LAION extracted the URLs from this dataset and downloaded the images from their respective storage locations, then used software to check whether the description already in the dataset actually matched the content visible in the image. The website from which the image was downloaded contained terms and conditions that prohibited, among other things, the use of automated programs to access the website or any content on it by way of downloading, indexing, scraping or caching.
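In outline, the process described in the facts looks something like the sketch below. This is not LAION's actual code; the `caption_matches` function is a stand-in for the image-text matching software referred to in the judgment, and the URL is hypothetical.

```python
import requests

def check_pair(image_url: str, caption: str, caption_matches) -> bool:
    """Download one image from its URL and test whether the caption
    stored in the dataset matches what the image actually shows."""
    response = requests.get(image_url, timeout=10)
    response.raise_for_status()
    image_bytes = response.content
    # 'caption_matches' stands in for the matching software from the
    # facts (e.g. a CLIP-style image-text similarity model).
    return caption_matches(image_bytes, caption)

# Hypothetical usage over one (URL, caption) pair from the dataset:
# check_pair("https://example.com/photo.jpg", "a red sports car", my_model)
```

It is this download step, rather than anything done with the images afterwards, that the Court was asked to assess.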
The decision
The Court rejected the plaintiff's claims, as the use was covered by the copyright exception for text and data mining "for the purposes of scientific research" (Article 3 of the DSM Copyright Directive as implemented in German law). This exception does not allow rightsholders to opt out. The intended use qualified as "text and data mining" as defined by the law, i.e. an automated analytical technique "aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations". The Court saw no evidence that LAION cooperated with a (commercial) third-party undertaking having decisive influence over it and preferential access to the search results, which would have excluded the exception. The Court expressly decided only on the legality of the download, and not on the question of the subsequent training of generative AI, which was not part of the claim brought.
Although further reasoning was not strictly necessary, the Court, in an obiter dictum, also gave an initial assessment of the applicability and interpretation of the "general" TDM exception (Article 4 of the DSM Directive as implemented in German law). The Court accepted that LAION's use generally qualified as text and data mining. Moreover, the Court tended to the view that the TDM exception covers not only data analysis but, with reference to Article 53(1)(c) AI Act, also the creation of datasets for the subsequent training of generative AI. However, there would likely have been a valid opt-out declared in the terms and conditions of the website that distributed the plaintiff's photographs. Although the opt-out had not been made by way of a programmed exclusion protocol (such as robots.txt) but in 'natural' language, the Court tended to the view that such a reservation was sufficiently explicit and specific. The opt-out could also be declared by a non-exclusive licensee of the rightsholder. In addition, such a reservation also likely satisfied the requirement of "machine readability" for content made available online, as state-of-the-art technologies (as mentioned in Article 53(1)(c) AI Act) were likely available to understand natural-language reservations.
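The contrast the Court draws can be made concrete. A robots.txt exclusion is machine readable in the conventional sense: a crawler can check it with standard tooling, as in the Python sketch below (the URLs and user agent are hypothetical). A reservation written into a site's terms and conditions has no equivalent standard parser, which is why the Court's reliance on state-of-the-art language technologies matters.

```python
from urllib.robotparser import RobotFileParser

# Conventional machine-readable opt-out: the robots.txt exclusion
# protocol, checkable programmatically before any download.
robots = RobotFileParser()
robots.set_url("https://example-photo-site.com/robots.txt")  # hypothetical
robots.read()
allowed = robots.can_fetch("TDMBot", "https://example-photo-site.com/img/1.jpg")
print(f"Download permitted by robots.txt: {allowed}")

# The kind of opt-out at issue here: a natural-language clause in the
# site's terms. There is no standard parser for this; on the Court's
# reasoning, state-of-the-art language models may nonetheless make it
# "machine readable" within the meaning of the exception.
terms_clause = (
    "The use of automated programs to access the website or any content "
    "on it by way of downloading, indexing, scraping or caching is "
    "prohibited."
)
```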
What does this mean for you?
The Court’s decision is the first judgement of an EU court addressing the interpretation of the TDM exception. Although the judgement is now subject to review by the appeal court, and although there is no rule of binding precedent in German law, the decision will very likely be taken into account by other courts in Germany and possibly beyond, as it addresses a number of controversial questions at the intersection of copyright and AI. The scientific community will likely welcome the judgement, as it sheds some light on the scope of the TDM exception for scientific purposes under the DSM Directive. It is also noteworthy that the Court saw the TDM exception as generally broad enough to include the training of generative AI. As regards the general TDM exception that also covers other commercial purposes, the discussion of what qualifies as an expressly stated and “machine readable” opt-out will stay high on the agenda.
Paolo Palmigiano looks at the evolving approach of competition authorities to the AI sector in light of recent developments.
The rapid evolution of foundation models and GenAI has recently become the focus of competition authorities.
Policy update
Most competition authorities, especially the UK's Competition and Markets Authority (CMA) and the European Commission (EC), are trying to get a better understanding of the competition issues that AI raises and how to address them.
In September 2024, the EC released a policy brief addressing competition in GenAI and virtual worlds, and on 16 October it launched a tender for a study on how AI will impact the Digital Markets Act, which regulates Big Tech. In April, the CMA published the outcome of its review of foundation models, and in July, the US authorities, the CMA and the EC published a joint statement on competition in GenAI.
Competition concerns
Most authorities agree on the competition concerns: foundation models require vast amounts of data, significant computing power, substantial investment and highly skilled specialists. Big Tech companies have a head start in all of these areas, which could distort competition in AI.
Recent merger cases
Most mergers and acquisitions in AI do not meet the merger control thresholds in the EU and UK and therefore do not get examined by the authorities. The UK, for example, is introducing a new merger control threshold in the new year that could capture some of these transactions (one party has £350m turnover in the UK and a 33% share of supply in any market, and the target has a link to the UK, even if it has no UK revenues). The EU is also considering possible changes to the Merger Regulation. A few years ago, the Commission reinterpreted a provision of the Regulation (Article 22 EUMR) to give itself the power to review mergers below the EU thresholds, but that interpretation has been quashed by the European Court of Justice, so the Commission now has to rethink. Some authorities suggest using the value of the transaction rather than turnover to capture these transactions under merger control rules, as Austria and Germany have done. Today, however, partnerships between Big Tech and small AI start-ups are becoming more prevalent than outright acquisitions. Partnership agreements tend not to meet the criteria for merger review; recent examples include Microsoft/OpenAI, Microsoft/Mistral AI and Amazon/Anthropic.
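As a rough sketch (and only a sketch: the statutory test is considerably more nuanced), the new UK threshold described above works along these lines:

```python
# Simplified sketch of the new UK merger threshold described above:
# one party with over £350m UK turnover and a 33%+ share of supply,
# plus a UK link for the target, even if the target has no UK revenue.
def new_uk_threshold_met(party_uk_turnover_gbp: float,
                         party_share_of_supply: float,
                         target_has_uk_link: bool) -> bool:
    return (party_uk_turnover_gbp > 350_000_000
            and party_share_of_supply >= 0.33
            and target_has_uk_link)

# A large acquirer buying a UK-linked AI start-up with no UK revenue
# could now be caught:
print(new_uk_threshold_met(2_000_000_000, 0.40, True))  # True
```

The point of the design is visible in the example: jurisdiction no longer depends on the target having any UK turnover at all.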
Acqui-hires
People are a key asset in AI. We are starting to see large companies buying smaller companies for the people they employ or, in an effort to avoid merger filings, hiring just the people and entering into an agreement with the start-up. An example is Microsoft's announcement in March 2024 that it had hired several former Inflection AI employees, amounting to almost all of Inflection AI's team, including two of its co-founders. In addition, Microsoft entered into a series of arrangements with Inflection AI including, among others, a non-exclusive licensing deal to use Inflection AI IP in a range of ways. The CMA, with its flexible merger test, took jurisdiction and reviewed the deal as a merger transaction, but cleared it on the basis that it did not lead to a substantial lessening of competition. The EC tried to assert jurisdiction but had to accept that the transaction did not fulfil the test under EU rules.
What next?
In the next few years, we will see competition authorities trying to deal with the competition issues AI raises, as well as reconsidering their powers so that they are able to review these transactions. Competition authorities are well aware of the lessons from the growth of the tech sector, where intervention was not immediate and was arguably too late when it did come. They are keen to avoid an equivalent scenario when it comes to AI businesses.
Séverine Bouvy looks at the latest Belgian DPA guidance on data and AI which focuses on AI system development.
In September 2024, the Belgian Data Protection Authority (BDPA) published an information brochure on AI systems and the GDPR outlining the interplay between the GDPR and the AI Act in the context of AI system development (Guidance).
The Guidance first outlines the criteria to be met to qualify as an AI system under the AI Act, following the definition in Article 3(1):

- it is a machine-based system
- it is designed to operate with varying levels of autonomy
- it infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions
- those outputs can influence physical or virtual environments.

In some cases, AI systems can also learn from data and adapt over time. Examples of AI systems in daily life include spam filters in emails, recommender systems on streaming services, virtual assistants, and AI-powered medical imaging tools.
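To make the definition concrete, here is a minimal sketch, not taken from the Guidance itself, of its spam-filter example: a system that infers from input data how to generate an output (a prediction that influences a virtual environment, the inbox) and that can adapt as it is retrained on new examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: the system learns from examples rather
# than following hand-written rules.
emails = ["win a free prize now", "meeting agenda attached",
          "cheap pills online sale", "quarterly report first draft"]
labels = ["spam", "ham", "spam", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

# The trained system infers from new input how to generate an output.
print(spam_filter.predict(["claim your free prize"]))  # e.g. ['spam']
```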
The Guidance goes on to tackle the application of the GDPR and the AI Act requirements to AI systems, emphasising how these two pieces of legislation complement and reinforce each other:
Lawful, fair, and transparent processing
The six legal bases for processing under the GDPR remain unchanged by the AI Act. In addition, the AI Act prohibits certain AI practices which pose unacceptable risk, such as social scoring and (subject to limited exceptions) real-time facial recognition in public spaces. The GDPR fairness principle is also reinforced by the requirement to mitigate bias and discrimination in the development, deployment, and use of AI systems.
Transparency
The AI Act complements the GDPR by mandating user awareness when interacting with AI systems, and where high-risk AI systems are concerned, by requiring clear explanations of how data influences the AI decision-making process.
Purpose limitation and data minimisation
Under the GDPR, data must be collected for specific purposes and limited to what is necessary. The AI Act reinforces these principles, especially for high-risk AI systems, for which the intended purpose must be clearly defined and documented.
Data accuracy
The GDPR requires data accuracy, which the AI Act strengthens for high-risk AI systems by requiring the use of high-quality and unbiased data to prevent discriminatory outcomes.
Storage limitation
The GDPR limits data storage to what is necessary for the processing (subject to certain exceptions). The AI Act does not add any extra requirements in that respect.
Automated decision-making
The GDPR allows individuals to challenge solely automated decisions which have a legal or similarly significant effect on them, while the AI Act emphasises proactive meaningful human oversight for high-risk AI systems.
Security of processing
Both the GDPR and the AI Act mandate security measures for data processing. The AI Act highlights risks unique to AI systems, such as bias and manipulation, and requires additional security measures such as identifying and planning for potential problems, continuous monitoring and testing, and human oversight throughout the development, deployment, and use of high-risk AI systems.
Data subject rights
The GDPR grants individuals rights over their personal data, such as access, rectification, and erasure. The AI Act enhances these rights by requiring clear explanations of how data is used in AI systems.
Accountability
Both the GDPR and the AI Act stress the importance of organisations demonstrating accountability. For AI systems, this includes risk management, clear documentation on the design and implementation of AI systems, human oversight for high-risk AI systems and incident reporting mechanisms.
Finally, the Guidance shows how to apply all these requirements to a specific use case, namely a car insurance premium calculation system.
János Kopasz looks at Hungary's approach to regulating AI Act compliance.
Hungary has taken a significant step towards implementing the EU's AI Act with Government Decree 1301/2024, which foresees the creation of a dedicated regulatory body under the Ministry of National Economy. This body will be responsible for both the notifying authority and market surveillance duties required by the AI Act, creating the possibility of 'one-stop-shop' administration for AI-related matters. It will also serve as the sole point of contact for fulfilling regulatory tasks related to the Act, simplifying procedures for AI developers and businesses.
In addition to these responsibilities, the future regulatory body will also be tasked with creating and operating a regulatory sandbox, a controlled environment that allows developers to test AI systems before market deployment. This sandbox will ensure that AI technologies can be developed and tested in compliance with safety, legal, and ethical standards, promoting both innovation and adherence to regulatory requirements.
A distinctive feature of Hungary’s approach is that, unlike in several other EU Member States, the responsibilities for AI regulation will not fall under the jurisdiction of the Data Protection Authority. Instead, the creation of a dedicated regulatory body emphasises the broader interdisciplinary nature of AI regulation, recognising that AI extends beyond data protection. This approach reflects a more comprehensive strategy for addressing the wider societal, economic, and technological impacts of AI but is at odds with the views expressed by the EDPB in its July 2024 Statement which recommended that Member States designate their Data Protection Authorities as their Market Surveillance Authorities under the AI Act.
The decree also envisions the establishment of the Hungarian Artificial Intelligence Council, a body comprising representatives from several key national institutions, including the National Media and Infocommunications Authority (NMHH), Hungarian National Bank (MNB), Hungarian Competition Authority (GVH), National Authority for Data Protection and Freedom of Information (NAIH), Supervisory Authority for Regulated Activities (SZTFH), and the Digital Hungary Agency. The Council will provide strategic guidance and official opinions on AI-related regulatory and policy matters. Its composition reflects the complexity of AI regulation, requiring insights from various sectors to address the multifaceted legal and compliance challenges AI presents. This wide-ranging representation highlights the fact that AI governance encompasses diverse legal fields, including data protection, financial regulation, competition law, cybersecurity, and telecommunications and media law.
The broad representation in the Council underscores the challenge that AI development and compliance present for companies. Businesses developing and deploying AI systems will need to navigate not only the specific requirements of the AI Act but also the intersecting regulations from various legal domains. The holistic, multidisciplinary approach is intended to ensure compliant and ethical AI operations. The increasing complexity of AI governance highlights the growing importance of responsible digital corporate governance in the ongoing digital transformation. Without such an approach, businesses will face increasing difficulty in ensuring AI systems are both compliant and aligned with the numerous regulatory requirements across sectors. This also means that non-compliance with AI regulations could result in multiple penalties under different laws, in addition to the AI Act's own substantial fines.
The decree sets a deadline of 30 November 2024 for the Minister of National Economy to prepare a proposal outlining the necessary legislation, related measures, and an assessment of the impact on the central budget. This proposal will detail the steps required to establish the regulatory body, the sandbox, and the council. More specific information about these developments will become available after this date.