Our quarterly AI newsletter provides analysis of key recent industry, legal and sector developments in AI with a focus on commercial technology, digital and data in the EU and UK.
Debbie Heywood and Alexander Schmalenberger look at what seems to be in store for reform of the EU AI Act.
The European Commission is set to unveil its Digital Omnibus proposal on 19 November 2025. This is intended to simplify the digital regulatory framework in the EU, particularly in relation to potentially overlapping obligations.
This briefing is based on leaked draft documents (note that references here are to the leaked documents, not to the law as currently in force). The final proposals may differ, but the general direction of travel appears set.
The leaks show the package is split into two main proposals: one focusing on 'quick fixes' for the AI Act, and a second, more complex proposal amending the data acquis, most notably the GDPR. Here we focus on the likely changes to the EU AI Act.
The first draft proposal addresses implementation challenges in the AI Act (Regulation (EU) 2024/1689), offering significant commercial and procedural clarifications. Important proposals include:
The second draft is arguably more significant, as the leaked proposals fundamentally amend the GDPR to create new, explicit legal bases for AI development as follows:
This package is likely to create considerable policy tensions, so we expect changes throughout the legislative process, and potentially also before publication as compared with the leaked versions. Watch out for more on the proposals from us when they are formally published by the Commission on 19 November.
Morgan Acton and Louise Popple analyse the High Court's ruling on secondary copyright infringement and its implications for AI training and UK legislative developments.
On 4 November 2025, the highly anticipated Getty Images v Stability AI judgment was handed down.
Getty achieved a limited win on trade mark infringement but lost its significant secondary copyright infringement claim. The trade mark output claim is particularly fact-specific to Getty's case and is discussed in further detail here. The main takeaway is that it provides proof of concept that AI developers can be liable when third parties' trade marks appear in outputs. However, Getty's loss of its claim for secondary copyright infringement has wider implications, discussed further here. For background and further details of the trial, see here and here.
The case was closely watched as the first UK Court action on copyright law and generative AI, addressing the key legal tensions surrounding whether the training and deployment of AI models infringes copyright. With a global media giant facing a leading AI developer, the case had potentially significant implications for both the creative and AI sectors at a time when stakeholders are urgently seeking clarity on the interaction between copyright law and AI.
Before the trial, Getty’s claims included:
During the trial, Getty dropped the primary copyright infringement training and development and output claims. Although Stability admitted to using Getty's images in training its Stable Diffusion model, Getty failed to establish that the actual training process occurred within the UK, with evidence indicating that Stability's computational infrastructure and servers were located abroad, placing the training activity outside UK territorial jurisdiction. Given that copyright is a territorial right, Getty dropped its training and development claim.
By trial, Getty's output claim had narrowed to just thirteen examples of alleged infringement, and Stability had already blocked the prompts that Getty's output claim relied on. This meant that had Getty pursued this claim, it would have faced significant evidentiary hurdles, such as proving that outputs copied a 'substantial part' of its images. This looked difficult as none of Getty's examples was particularly unique or original and there was a limited connection between the input and output works. Getty also had difficulty proving its right to bring a claim as an exclusive licensee and faced various other chain of title issues.
This narrowing in scope was disappointing for those awaiting potential guidance on the core copyright issues surrounding AI training and outputs.
Getty's allegation of secondary infringement is that Stability has imported into and distributed in the UK an 'article' which is – and which Stability knows or has reason to believe is – an 'infringing copy' of Getty's copyright works contrary to sections 22 and 23 of the Copyright, Designs and Patents Act 1988 (CDPA). Getty says that the 'article' is the model weights (that part of the model that learns patterns and statistics during training and determines how an AI model processes inputs to produce outputs).
Getty did not argue that the model weights contain (or have ever contained) any kind of copy of Getty's works. Crucially, it did not argue that the weights 'effectively' store the works by containing some sort of 'residue' or representation of the works or instructions to render them. The court decision is therefore predicated on the basis that the model weights do not and never have stored Getty's copyright works. Other litigants might choose to argue this aspect of the case differently.
Rather, Getty argued that the making of the model weights involved copyright infringement. More specifically, it argued that 'but for' the copyright works existing in the training dataset (which the parties agreed they did), the model weights could not have been created. It argued this because section 27 of the CDPA says that an article is an 'infringing copy' if its making would have constituted an infringement had it occurred in the UK.
The Court dismissed the secondary copyright claim, holding:
The Getty decision hasn't delivered the much-needed answers to fundamental questions surrounding AI training and outputs, particularly as to whether using copyright works to train AI models amounts to copyright infringement, nor does it remove the risk of copyright infringement claims for AI developers.
Having said that, the judgment does bring the UK's primary and secondary copyright infringement frameworks and how they interact into sharp focus. Whether the UK government factors this into the outcome of its ongoing Copyright and AI Consultation remains to be seen. Similar questions are being considered in ongoing litigation in the US and EU.
While the outcome of this case has proved less definitive than originally hoped, that is partly because the case itself was not without its difficulties. Had Getty been able to overcome the evidential burdens and territorial aspects of the dropped primary copyright infringement training and output claims, the outcome could have been very different. While there have been numerous learnings for both copyright owners and AI developers, for now, the underlying issues are far from decided.
Alexander Schmalenberger looks at the key elements of the EC's draft guidance and reporting template for serious incidents involving high-risk AI systems.
The European Commission published draft guidance and a reporting template for serious incidents involving high-risk AI systems under the EU AI Act on 26 September 2025. These documents, which were subject to public consultation until 7 November 2025, provide crucial practical implementation details for Article 73 AI Act's serious incident reporting obligations.
We outline some key points arising from the guidance and you can read about them in more detail here.
The guidance transforms the previously abstract legal definition of 'serious incident' into concrete, actionable criteria. A reportable serious incident occurs when an AI system malfunction directly or indirectly causes one of four specific outcomes:
Primary responsibility rests with high-risk AI system providers. Once a provider becomes aware of a potentially serious incident with an established or reasonably likely causal link to its system, strict deadlines apply:
The Act permits incomplete initial reports followed by complete submissions to meet these deadlines.
After reporting, providers must immediately investigate incidents, conduct risk assessments, and implement corrective actions. Crucially, AI systems must not be altered in a way that could affect subsequent evaluation of the causes before the authorities are informed.
Deployers are required to inform providers 'immediately' – interpreted pragmatically as within 24 hours – on identifying serious incidents.
The draft guidance addresses concerns about duplicate reporting obligations. For high-risk AI systems in sectors with existing equivalent reporting requirements – financial services (DORA), critical infrastructure (NIS2, CER), or medical devices (MDR) – a simplified procedure applies. The AI Act reporting obligation is triggered only for fundamental rights infringements; all other incidents are reported exclusively under sectoral laws, preventing redundant bureaucracy while ensuring legal certainty.
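Purely by way of illustration, the routing rule described above can be expressed as a simple decision function. This is a hedged sketch based on the draft guidance as summarised here; the function, sector labels and return strings are hypothetical and not taken from the Commission's documents.

```python
# Illustrative sketch of the simplified reporting route described in the draft
# guidance: for high-risk AI systems already covered by equivalent sectoral
# reporting regimes, only fundamental rights infringements trigger an AI Act
# report; all other incidents are reported exclusively under the sectoral law.
# Names and categories are hypothetical, for illustration only.

SECTORAL_REGIMES = {
    "financial_services": "DORA",
    "critical_infrastructure": "NIS2 / CER",
    "medical_devices": "MDR",
}

def reporting_route(sector: str, fundamental_rights_infringement: bool) -> str:
    """Return the reporting route suggested by the draft guidance."""
    sectoral_law = SECTORAL_REGIMES.get(sector)
    if sectoral_law is None:
        # No equivalent sectoral regime: the AI Act reporting regime applies in full.
        return "Report under Article 73 AI Act"
    if fundamental_rights_infringement:
        # Simplified procedure: AI Act reporting is still triggered.
        return "Report under Article 73 AI Act"
    # All other incidents go exclusively through the sectoral regime.
    return f"Report exclusively under {sectoral_law}"

print(reporting_route("financial_services", fundamental_rights_infringement=False))
```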
The new template structures reporting into five sections: administrative information identifying the reporter, timing, and relevant authority; AI system information including EU database identification; detailed incident information covering causality and affected parties; provider analysis documenting investigation results and actions taken; and general comments including legal disclaimers.
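For illustration only, the five-part structure of the template might be represented as a simple data object. The field names below are paraphrased from the summary above and are not the Commission's own labels.

```python
# Hypothetical sketch of the five sections of the draft reporting template.
# Field names are illustrative, not taken from the Commission's template.
from dataclasses import dataclass, field

@dataclass
class SeriousIncidentReport:
    administrative_information: dict = field(default_factory=dict)  # reporter, timing, relevant authority
    ai_system_information: dict = field(default_factory=dict)       # incl. EU database identification
    incident_information: dict = field(default_factory=dict)        # causality, affected parties
    provider_analysis: dict = field(default_factory=dict)           # investigation results, actions taken
    general_comments: str = ""                                      # incl. legal disclaimers

report = SeriousIncidentReport(
    administrative_information={"reporter": "Provider X", "authority": "Market surveillance authority"},
)
```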
These draft documents make the AI Act's requirements tangible for companies developing or deploying high-risk AI systems. The public consultation closed on 7 November 2025, and final guidance will be published in due course.
Debbie Heywood looks at what the UK government has (and hasn't) recently said on regulating AI in the UK.
Is there still going to be a UK AI Bill and if so, what will it cover? The Bill, announced in the King's Speech of 17 July 2024, was described as "seek[ing] to establish the most appropriate legislation to place requirements on those working to develop the most powerful AI models". While there was no elaboration on what the legislation might cover in the background briefing notes to the speech, it was widely assumed that any proposed legislation would be far less comprehensive than the EU's AI Act but would focus on safety of frontier systems. It was also rumoured that the outgoing Conservative government had been working on draft legislation, given the UK's active role in setting up the first in a planned series of AI Safety Summits, and that this would form the basis of a Labour Bill.
We're now approaching the end of 2025 and, while there has been a consultation on AI and copyright (results pending), there has been nothing on an AI Bill itself and the indications are that we won't see anything this year. Reports suggest nothing will be published until a decision has been taken on whether to include an AI Bill in the spring 2026 King's Speech. Arguably more noteworthy than the delay though is the distinct shift in tone from the government. The focus appears to have moved away from AI safety in favour of growth and national security – a direction of travel it is hard not to associate with a similar shift in the US focus under the Trump administration.
Perhaps a clue lies in the government's Blueprint for AI regulation announced on 21 October 2025 alongside a call for views on an AI Growth Lab (for which responses are requested by close on 2 January 2026). The AI Growth Lab is envisaged as a cross-economy sandbox which would oversee deployment of AI-enabled products and services that are currently impeded by existing regulation. The government proposes using supervised regulatory sandboxes to test AI in real-world conditions in healthcare, professional services, transport and advanced manufacturing. The model will enable time-limited, closely supervised modifications to specific regulatory requirements, under a licensing scheme with safeguards such as stopping testing and imposing fines if terms are breached or risks emerge.
With the launch of yet another AI initiative, it is arguable that the government is drifting back towards a sector-based approach to regulating AI, as proposed in the Conservative government's August 2023 White Paper. The government has said it plans to cut red tape for data centres (it also announced that OpenAI will begin hosting data in the UK), which doesn't need to be covered in an AI-specific Bill, and copyright issues may not trigger new legislation. If the move is away from AI safety, legislation may still be needed to remove any statutory barriers to the AI Growth Lab sandbox environments, but any eventual AI Bill may be more limited in scope than initially envisaged.
Oz Watson provides a regulatory perspective on the digital transformation of brand influence.
The influencer marketing industry is at an inflection point. Virtual personalities powered by artificial intelligence are no longer experimental curiosities but are becoming serious commercial tools, fundamentally altering how brands engage with consumers and raising novel regulatory questions in the process.
Recent developments in China illustrate the commercial viability of AI-driven influence. A popular live-streamer deployed digital avatars of himself and his co-host on Baidu's e-commerce platform, and the six-hour session generated 55 million yuan (approximately US$7.65 million), outperforming his previous human-only livestream. Built using generative AI trained on five years of video content, these avatars replicated not just appearance but communication style, humour, and sales techniques.
This represents more than technological achievement; it signals a fundamental shift in the economics of influence. AI avatars eliminate production costs, require no breaks, and can operate continuously across multiple platforms and languages. For brands, this offers unprecedented control over messaging, consistency, and scalability - advantages that traditional human influencers cannot match. AI influencers can also be easily adapted to different territories, products or markets.
The social media and marketing communities have long emphasised authenticity, and connection with followers, as the cornerstone of influencer marketing. Yet emerging evidence suggests audiences – particularly Gen Z and Gen Alpha – may be more accepting of virtual influencers than anticipated. Platforms like Instagram and TikTok report strong engagement with AI-generated content, even when audiences know it is synthetic.
This creates a paradox for regulators and brands alike. If consumers willingly engage with content they know is artificial, does the traditional framework of 'authenticity' in advertising and influencer marketing still apply? The answer likely depends on transparency. Brands that openly disclose their use of AI avatars while delivering relevant, entertaining content appear to maintain audience trust. Conversely, attempts to pass off AI-generated content as human-created risk both regulatory sanction and reputational damage.
The proliferation of AI avatars raises several pressing regulatory concerns:
Rather than wholesale replacement, we are likely witnessing the emergence of a hybrid model. Human creators are beginning to deploy AI versions of themselves to scale content production while preserving their personal involvement in key moments. This approach may offer the best of both worlds: the efficiency and consistency of AI with the genuine connection and authenticity that human creators provide.
For brands and their legal advisers, the message is clear: AI avatars are not a passing trend but an evolving reality requiring careful navigation of transparency obligations, intellectual property considerations, and consumer protection standards. Those who approach this technology thoughtfully – with robust disclosure practices and ethical guardrails – will be best positioned to harness its potential while managing its risks. Sheerluxe recently launched an AI 'fashion and lifestyle editor' as a new team member but was met with backlash from fans. Getting the balance right will be key to maintaining trust with consumers. The age of synthetic influence has arrived; how will the regulatory framework evolve?
Karl Cullinane looks at the use of AI 'talent' in film and fashion.
Technological disruption is nothing new in the world of entertainment. Nearly a century ago, talking pictures began to displace silent films in Hollywood, and many years later CGI became mainstream despite Tron being disqualified from the best visual effects category at the 1982 Oscars because the Academy decided that its trailblazing use of CGI amounted to cheating. Now AI technology can create fully synthetic performers with distinct personalities, consistent appearances, and the ability to work without breaks, ageing or ever taking a sick day.
Hollywood put artificial intelligence centre stage during the 2023 strikes, including the 118-day walkout by the entertainment labour union SAG-AFTRA, which claimed studios wanted to use the likeness of scanned background actors 'for the rest of eternity' in exchange 'for one day's pay'. In that instance, the studios appeared to concede, providing for contractual protections based on explicit consent before creating "digital replicas" of performers, along with mandatory notification when using entirely 'synthetic performers'. Although these protections were heralded as an important first step, the broader legal landscape remains fragmented.
While fully AI-generated 'performers' are now a possibility, the mixed and mostly negative response to the launch of AI actor Tilly Norwood by Xicoia, the AI division of the production company Particle6 Group, suggests that neither Hollywood nor its consumers are sold on the idea of abandoning real people. More interesting is the possibility of using AI versions of real actors to perform stunts, or even give whole performances.
Even with the challenge of making the law fit such changing circumstances, it seems certain that such 'filming' can only happen with a performer's consent. Well-known individuals already enjoy a range of legal rights against misappropriation of their name, likeness, or voice. Courts across the world have repeatedly intervened based on the protection of intellectual property (either rights in passing off, or image rights where they exist) or defamation if the content in question causes reputational harm beyond the income lost from real-life campaigns. Data protection law, as well as bespoke protections relating to AI, is also likely to develop in this area.
Deepfakes pose new challenges. Their scale, cost and realism will likely outpace existing court-based processes. This challenge may be most evident on recently released text-to-video platforms, such as OpenAI's Sora 2 product. Within days of the platform's launch, actor Bryan Cranston (of Breaking Bad fame) complained about the use of his image on the platform, only to praise OpenAI days later for acting quickly to add protections.
Consumer preference seems to dictate the nature and extent of AI adoption in entertainment, but regulation to supplement market forces may be more likely in the modelling and influencer industries (see our article on AI avatars above). The fashion industry faced backlash for photoshopping real models to unreal proportions long before AI models existed, but a recent Guess Jeans advertising campaign featuring an AI model in Vogue caused outrage in the fashion world and led to the publication quickly clarifying that AI models have never been used in its editorial content.
High fashion prefers famous faces and the novelty of AI-generated models might quickly wane if other brands were to adopt them as standard. But for fast fashion retailers which may have thousands of lines on sale at any time, using AI models can show customers what their clothes look like in a range of sizes for a fraction of the cost of running photoshoots.
Retailers would have to take care to ensure that the clothes they advertise in this way are accurately portrayed, but if they adopt unrealistic or unattainable body images for AI models, that may well inspire further regulation. Advertising regulators have tried different approaches to ensure consumers are not presented with unhealthy, idealised body images, but it remains to be seen how they will enforce these standards when virtual models are used.
Whether in entertainment or fashion, new approaches will have to be found to balance industry desires and consumer demand with appropriate measures to protect public wellbeing and talent livelihoods. It seems though that AI 'talent' won't disappear. Would anyone dare paraphrase Chaplin's famous declaration – "I give the talkies six months more. At the most, a year. Then they're done"?
Giulia Carloni looks at whether and to what extent generative AI is covered by the OSA. You can read a longer version of this article here.
Following distressing incidents involving generative AI chatbots, including cases where users created chatbots mimicking deceased children, Ofcom issued an open letter in late 2024 clarifying how the UK's Online Safety Act (OSA) applies to generative AI services. However, ambiguities remain.
Generative AI chatbots fall within the Act's user-to-user (U2U) service definition when they enable users to share chatbot-generated text, images or videos with other users, including through group chat functionality. Services allowing users to create their own chatbots – 'user chatbots' - which are made available to others are also U2U services, with any content created by these user chatbots constituting regulated user-generated content.
The critical trigger is whether other users can encounter the content, regardless of whether the uploading user intended this or whether other users actually encounter it. Services like Girlfriend GPT that allow users to share characters would qualify as U2U platforms, whilst private chatbots like Replika, where users engage individually without sharing, may fall outside the regime.
Generative AI tools modifying search results or providing live internet results qualify as search services under the OSA. Additionally, platforms with generative AI tools capable of generating pornographic material face specific age assurance requirements.
Essentially, in-scope AI-generated content is regulated in the same way as human-generated content. In relation to the provisions applying to content harmful to children, chatbot outputs are treated as though generated by humans and are assessed in the same way. The entire range of OSA obligations will apply, including age verification for primary priority content, effective takedown systems, and appropriate complaints mechanisms.
A significant ambiguity concerns illegal content and whether chatbots can fulfil the mental element requirement in relation to committing crimes under section 192(6). The alternative is that their actions are ascribed to providers or users but the answer is unclear. Section 59(12) suggests bot-generated content can amount to an offence but does not resolve the mental element question.
For provider-controlled chatbots integrated into platforms, it is unclear whether their outputs constitute regulated content triggering safety duties, as service providers need only act on user-generated content, rather than on provider-controlled account outputs. Search services face different scope definitions, with search results not limited to third-party content replication, though questions about chatbot content as criminal content remain open.
There are also questions as to whether generative AI services control underlying search engine functionality - crucial for determining regulatory classification - and where boundaries lie between search and chatbot functions, particularly as services like ChatGPT now incorporate search capabilities.
Despite Ofcom's guidance, there are unanswered technical questions which could affect how complete the protections are in practice. Existing mitigation frameworks designed for other content types may prove insufficient for chatbot-specific risks.
Providers face the challenge of conducting thorough service assessments against OSA definitions while implementing adaptable safety measures for evolving regulatory expectations. The regulatory landscape will continue evolving through guidance, industry engagement and strategic enforcement action, with clarity potentially emerging through enforcement cases rather than guidance alone, creating uncertainty for compliance planning.
Katie Chandler and Esha Marwaha look at how the courts might apply established principles to AI liability disputes.
AI disputes are no longer theoretical. As autonomous systems are deployed across industries, we can expect to see the first wave of complex, multi-party litigation reaching the courts, demanding that lawyers, judges and experts navigate domains of engineering, computer science and statistical inference that increasingly push the boundaries of traditional commercial litigation practice. Here we examine how English courts may adapt established principles to the realities of AI disputes which are likely to involve multiple parties across the supply chain with each participant contributing to, but not solely controlling, the final system's behaviour.
AI systems do not operate according to deterministic rules, and their performance depends on the data that shapes them, the environments in which they operate, and the probability models within their design. When, for example, a contract promises 'safe, intelligent' performance, courts must decide whether those words create an absolute guarantee or an obligation to exercise reasonable skill and care. The enforceability of limitation clauses will be tested under the Unfair Contract Terms Act 1977, with courts considering whether parties could reasonably exclude liability for systems that actively create danger rather than merely failing to prevent it.
The allocation of responsibility for human intervention is now one of the most significant features of AI contracting, reflecting an attempt to bridge the gap between human judgment and machine autonomy. In order to assess liability, courts will need to consider not merely whether human oversight was contractually required, but whether it was practically feasible and whether the party bearing that obligation had sufficient information and control to discharge it effectively.
Determining what counts as reasonable care for AI developers is far from straightforward, as these systems learn, adapt and change over time. The relevant benchmark may need to draw on emerging industry standards, and failure to implement technical documentation, risk management systems or post-market monitoring will likely be treated by English courts as evidence of falling below the standard of reasonable care. Novel, state-of-the-art expert technical evidence will be required and is likely to evolve rapidly as these issues develop.
Causation in AI disputes rarely follows a single line, with each actor in the supply chain potentially contributing to the loss. The traditional 'but for' test offers little guidance when an outcome arises from the interaction of multiple dynamic systems, and courts may instead adopt a material contribution approach, asking whether each party's conduct materially increased the risk of harm. Resolving such disputes will require not only expert evidence but statistical inference, forcing courts to weigh the relative contribution of several causes, none of which can be isolated entirely.
Unlike traditional product liability claims, the key information lies not in physical defects but in data, logs and code, with disclosure needing to extend to training datasets, version control logs, software updates, testing protocols and post-deployment monitoring data. Expert witnesses across multiple disciplines will be needed, and courts may require preliminary tutorials to grasp the basic science before hearing evidence.
These are just some of the considerations for AI liability disputes which will be multi-party, data-driven and technically complex, yet anchored in the familiar structures of contract and negligence, marking the beginning of a new evidential era where understanding how an algorithm learns may one day be as important as understanding what a contract says.
To see these issues applied in a fictional case study, read a longer version of this article here.
Alexander Schmalenberger compares California's approach to regulating AI under Senate Bill No. 53 with the EU AI Act.
The EU and California have emerged as pioneers in AI regulation, but their differing approaches create a complex compliance landscape for global organisations. The EU AI Act establishes a comprehensive, risk-based approach to govern the entire AI market, while California's Senate Bill No. 53 adopts a more focused strategy specifically targeting developers of high-capability 'frontier' AI models.
The EU AI Act aims to create a legal framework for AI, with a core philosophy that is preventive and market-wide, aiming to foster 'trustworthy AI' by ensuring that safety and fundamental rights are protected before a system is placed on the market. It takes a risk-based approach, categorising AI systems into four distinct groups, which are not mutually exclusive, and applying obligations according to the level of risk. Unacceptable risk systems are prohibited and minimal risk systems remain largely unregulated, with the most onerous obligations applying to high-risk systems and limited risk systems (like chatbots) subject to transparency requirements.
A key feature of the AI Act is its extraterritorial scope, as it applies not only to EU entities but to any provider whose AI system's output is used in the Union, with enforcement backed by a new European AI Office, empowered to impose substantial penalties of up to EUR 35 million or 7% of global annual turnover.
In the absence of a federal AI statute in the USA, California has enacted Senate Bill No. 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), to "create more transparency," acknowledging that "collective safety will depend in part on frontier developers taking due care ... proportional to the scale of the foreseeable risks".
The legislation's scope is narrow, applying only to 'frontier developers' - those training a 'frontier model' using more than 10^26 computing operations, with a higher compliance burden placed on 'large frontier developers' with annual gross revenues exceeding US$500 million.
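To illustrate, the scope thresholds can be expressed as a simple check. This is a sketch only: the figures come from the text above, while the function and variable names are hypothetical and the actual statutory tests are more nuanced. The key obligations then follow below.

```python
# Illustrative sketch of the TFAIA scope thresholds described above.
# The figures are taken from the text; names are hypothetical and not
# drawn from the statute itself.

FRONTIER_COMPUTE_THRESHOLD = 10**26              # training compute, in operations
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000  # annual gross revenue, USD

def tfaia_classification(training_compute_ops: float, annual_gross_revenue_usd: float) -> str:
    """Return a rough TFAIA classification under the stated thresholds."""
    if training_compute_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "out of scope (not a frontier developer)"
    if annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD:
        return "large frontier developer (higher compliance burden)"
    return "frontier developer"

print(tfaia_classification(3e26, 1_200_000_000))  # -> large frontier developer (higher compliance burden)
```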
Key obligations include:
The two frameworks reveal fundamentally divergent regulatory philosophies.
Preventive versus post-development approach: the EU AI Act establishes a preventive, pre-market regulatory model with risk assessment based on the intended application of an AI system, while the TFAIA institutes a post-development framework that predicates obligations on a developer's corporate status and the computational resources expended to train a model.
The divergent paths taken by the European Union and California in regulating AI present significant compliance challenges for global organisations, which may be classified both as a high-risk system provider under the AI Act and as a large frontier developer under the TFAIA, and so face two different sets of rules. The most prudent strategy for navigating this fragmented landscape is to develop a unified AI governance framework that integrates the EU's rigorous risk management principles with California's transparency and proactive incident reporting duties.
Orsingher Ortu's Enrico Pernice looks at Italy's new AI law.
On 23 September 2025, Italy enacted Law No. 132/2025 on Artificial Intelligence (Italian AI Law), which entered into force on 10 October 2025. This is the country’s first comprehensive legislative framework on AI. Rather than imposing new obligations, it implements and complements the EU AI Act, ensuring coherent and uniform application within the Italian legal system.
The law promotes a responsible, transparent, and human-centric approach to artificial intelligence, safeguarding fundamental rights, data protection, safety, and fairness. Its guiding purpose is to uphold human autonomy, democratic integrity, and the protection of citizens’ rights. Special attention is paid to freedom of expression, media pluralism, and equal access to AI systems for persons with disabilities.
Access to AI systems by children under 14 requires the consent of parents or legal guardians. Teenagers aged 14 to 18 may consent autonomously, provided the information is presented in clear, age-appropriate language.
AI tools used to improve working conditions, safety, and productivity must preserve human dignity and transparency. Employers are required to inform employees in advance when AI is used for automated decision-making or monitoring, especially in recruitment, task allocation, performance evaluation, or dismissal.
In the healthcare sector, AI is recognised as a tool that supports prevention, diagnosis, treatment, and inclusion, in full respect of patients’ rights and non-discrimination principles. The law classifies as public interest the processing of health data for research by non-profit entities, IRCCS institutes, and healthcare organisations, and authorises the secondary use of anonymised, pseudonymised, or synthetic data, provided patients are adequately informed.
Only AI-assisted works that reflect a substantial human intellectual contribution are eligible for copyright protection. Fully autonomous AI-generated works are excluded. The training of AI models may rely on text and data mining (TDM) exceptions under Italian copyright law, provided the sources are lawfully accessible and the use is for scientific research or has not been expressly reserved by rightsholders.
The unauthorised dissemination of AI-manipulated images, videos, or audio capable of misleading the public and harming individuals is punishable by imprisonment from one to five years.
The government is tasked with adopting, within twelve months, legislative decrees establishing a comprehensive framework for the use of data, algorithms, and mathematical models in AI training.
The National Cybersecurity Agency (ACN) and the Agency for Digital Italy (AGID) are designated as the national supervisory authorities for AI, while existing regulators such as the Data Protection Authority (Garante) retain their jurisdiction.
Beyond the private sector, the law also extends to public administration, justice, national security, and economic development, positioning Italy as one of the first EU Member States to translate the EU AI Act into a domestic governance framework for trustworthy and human-aligned artificial intelligence.
ECIJA's Carlos Rivadulla Oliva looks at Spain's AI Supervisory Agency guidelines on complying with transparency requirements under the EU AI Act.
In September 2025, the Spanish Artificial Intelligence Supervisory Agency (AESIA) published its first interpretative guidance on transparency obligations under Article 50 of the AI Act. These recommendations aim to help providers and deployers ensure transparent and trustworthy AI use.
Article 50 of the AI Act establishes four main situations in which transparency duties apply:
In all cases, disclosure must be clear, visible, and accessible, and provided no later than the first interaction or exposure of the person to the AI system or its outputs.
Although not legally binding, AESIA’s Guidelines offer valuable interpretative and operational guidance:
Compliance with Article 50 will entail both technical and interpretative challenges. Ensuring that transparency tools are resilient against manipulation while maintaining usability will require careful engineering. Likewise, determining when disclosure can be waived due to 'obviousness' may give rise to regulatory uncertainty and litigation.
Despite these hurdles, transparency remains a cornerstone of the EU’s trustworthy AI framework. The AESIA guidelines constitute one of the first national efforts to operationalise Article 50, offering a practical roadmap for organisations to communicate the artificial nature, limitations, and intended uses of AI systems. Providers and deployers targeting the Spanish market should use these recommendations to update their documentation, labelling, and user disclosures ahead of enforcement but the guidelines may also prove helpful more widely, especially where Member States have not produced their own guidance.
Christian Frank looks at the impact of GEMA's victory against OpenAI in the Regional Court of Munich.
In a ruling on 11 November 2025, the Regional Court of Munich I essentially upheld GEMA’s claims for injunctive relief, information, and damages against two companies in the OpenAI group (Case No. 42 O 14139/24). The court dismissed claims based on a violation of general personality rights arising from incorrect attribution of modified song lyrics. GEMA – the German collecting society for musical performing and mechanical reproduction rights – sued OpenAI for copyright infringement concerning the lyrics of nine well-known German authors, arguing that the lyrics were memorised by OpenAI’s language models and, when the chatbot was used, were reproduced in large parts verbatim in response to simple user queries. OpenAI argued that its language models do not store or copy specific training data, but reflect – within their parameters – patterns learned from the entire training dataset. Because outputs are generated in response to user prompts, responsibility for any output lay with the user as the creator. In any event, any legal infringements were said to be covered by copyright limitations, in particular the text-and-data mining (TDM) exception.
The court held that GEMA is entitled to the asserted claims both for (i) reproduction in the language models and (ii) reproduction in outputs. In the court’s view, memorisation within the models and the subsequent reproduction of lyrics via the chatbot each infringed exclusive rights of exploitation. None of the relevant exceptions applied – in particular the TDM exception in § 44b UrhG / Art. 4 DSM Directive. The lyrics were found to be reproducible using ChatGPT-4 and ChatGPT-4o. The court observed that "memorisation" is recognised in information-technology research: large language models do not merely extract information from the training dataset during training; they may also adopt training data in their post-training parameters. Here, memorisation was established by comparing the lyrics in the training data with the model’s outputs. Given the complexity and length of the songs, the court ruled out coincidence as the cause of the reproductions.
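The judgment as summarised here does not set out a detailed technical methodology, but the kind of comparison described – matching training-data lyrics against model outputs – can be illustrated with a simple verbatim-overlap measure. The sketch below is purely hypothetical and is not the method used in the proceedings; the placeholder strings and function name are the author's own.

```python
# Hypothetical sketch of a verbatim-overlap check of the kind described above:
# compare a reference lyric with a model output and measure how much of the
# reference reappears verbatim. Not the methodology used in the case.
from difflib import SequenceMatcher

def verbatim_overlap(reference: str, output: str) -> float:
    """Fraction of the reference text that reappears in matching blocks of the output."""
    matcher = SequenceMatcher(None, reference.lower(), output.lower(), autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(reference), 1)

reference_lyric = "placeholder lyric known to be in the training data"
model_output = "placeholder chatbot response to a simple user query"
score = verbatim_overlap(reference_lyric, model_output)
print(f"verbatim overlap: {score:.0%}")  # high values would suggest memorisation
```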
Read more about this and related decisions here.
Alexander Schmalenberger looks at the Apple-commissioned study suggesting LRMs have fatal reasoning limitations – and at the counterarguments.
How smart are the world’s most advanced artificial intelligence models? Can they genuinely solve complex problems, or are they just exceptionally good at simulating thought? This is one of the most pressing questions in technology today, and a fascinating scientific debate, sparked by two recent studies – one even “co-authored” by an LLM – brings us closer to an answer.
The discussion centres on a bold claim: that even the most powerful Large Reasoning Models (LRMs) suffer a "complete accuracy collapse" when faced with sufficiently complex tasks. But is this a genuine limitation of AI, or a flaw in how we test it?
In an Apple-commissioned study titled "The Illusion of Thinking", published in June 2025, researchers Shojaee et al. put leading LRMs, such as Claude 3.7 Sonnet Thinking and DeepSeek-R1, through their paces. These models are designed to 'think out loud', generating detailed reasoning steps before giving a final answer.
The researchers used a series of classic logic puzzles - like the Tower of Hanoi and River Crossing - to precisely control the difficulty and analyse the AI's performance. Their findings were stark and pointed towards a fundamental weakness:
The conclusion of the study was clear: beyond a certain threshold, the reasoning ability of these advanced AIs simply breaks down.
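For context on how quickly these puzzles scale: the optimal Tower of Hanoi solution requires 2^n - 1 moves, so writing out every move grows exponentially with the number of disks – a practical constraint that becomes relevant to the critique below. A minimal illustrative sketch (not the study's benchmark code):

```python
# Illustrative sketch (not the study's benchmark code): the optimal Tower of
# Hanoi solution has 2**n - 1 moves, so asking a model to write out every move
# makes the required output length grow exponentially with the number of disks.

def hanoi_moves(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list[tuple[str, str]]:
    """Return the optimal sequence of moves for n disks."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)
        + [(source, target)]
        + hanoi_moves(n - 1, spare, target, source)
    )

for disks in (3, 8, 12, 15):
    print(disks, "disks ->", len(hanoi_moves(disks)), "moves")  # 7, 255, 4095, 32767
```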
In a direct response, a 2025 commentary by Lawsen – supported by LLM 'co-author' Claude Opus – titled "The Illusion of the Illusion of Thinking" challenges these findings. Lawsen argues that the 'reasoning collapse' is not a failure of the AI, but a failure of the experimental design.
The critique is precise and multifaceted:
This debate highlights a profound challenge: our ability to evaluate AI may be lagging behind the technology itself. The so-called collapse of reasoning might indeed be an illusion, created by evaluation methods that aren't sophisticated enough to distinguish between genuine intelligence and practical constraints.
Moving forward, the field needs better benchmarks. Researchers must verify that puzzles are solvable, use metrics that reflect true computational difficulty, and allow for different forms of solutions.
For all of us who use AI, this debate is a crucial reminder. The 'black box' nature of these systems, combined with their tendency to 'hallucinate' and provide incorrect information, demands critical oversight. The principle of AI literacy - the ability to critically assess AI-generated content - is no longer a niche skill but a necessity. We must resist 'automation bias', the tendency to over-rely on automated systems.
Ultimately, the responsibility for any AI-generated result lies with the human operator. These powerful tools require equally powerful scrutiny. Their answers should be the starting point for our judgment, not the end of it.
Debbie Heywood discusses the latest UK developments in the ongoing debate over the use of copyright materials to train AI.
The planned reform of the UK's data regime under the Data (Use and Access) Bill threatened to be derailed not by the data-related provisions, but because it was 'hijacked' by the AI copyright debate. Concerns that, in its ongoing consultation on copyright and AI, the government favours allowing data scraping and the use of copyright materials to train AI unless the rightsholder opts out (similar to the TDM exception in the EU Copyright Directive) led the House of Lords to push for rightsholder protections in the DUA Bill. Ultimately, the Lords gave way, but not before forcing a number of relatively minor amendments into what became the Data (Use and Access) Act on 19 June 2025.
The additions (in ss135-8) require the Secretary of State to publish an assessment of the economic impact of the four options proposed in the consultation on AI and copyright within nine months of the DUA Act's date of Royal Assent. The Secretary of State must also publish a report on the use of copyright works in the development of AI systems and consider the four consultation options. This report must consider and make proposals in relation to:
These considerations must cover the impact on stakeholders, including access from outside the UK, and take into account consultation responses. The Secretary of State must also publish a progress report within six months of the DUA Act receiving Royal Assent.
During the Act's passage, the government consistently argued that it was not appropriate to deal with the complex issue of copyright and AI in the DUA Act, but also rowed back from supporting a particular stance on the issue pending the outcome of the consultation.
Adding weight to the government's case was the fact that the High Court was about to hear the long-awaited Getty Images v Stability AI case. On 25 June 2025, Getty Images dropped its claim of primary copyright infringement against Stability AI, citing insufficient evidence about how and where Stability trained its models. Getty is, however, still pursuing claims for trade mark infringement, passing off, and secondary copyright infringement and has said it may pursue a primary copyright claim in other jurisdictions. The trial has now concluded with judgment expected in the autumn. Even with the dropping of the primary infringement claim, the decision still stands to have a significant impact if importing the AI model into the UK is found to infringe.
With the government reports on AI and copyright at least nine months away and the Getty decision yet to come, it seems unlikely we will see a final decision on whether and how to legislate on copyright and AI in the UK, much less an actual legislative proposal, this year.
Benedikt Kohn, Jakob Horn, Caroline Bunz and Mellissa Guimaraes look at the status of key AI Act guidelines including the GPAI Code of Practice and guidance on high-risk AI systems.
The EU AI Act is the first European Regulation to establish harmonised rules on the use of AI. Its objective is not to stop the development of these systems, but rather to ensure that they are trustworthy. The AI Act obliges providers, deployers, importers, product manufacturers, and authorised representatives intending to bring an AI system to market to fulfil a set of requirements, including compliance procedures and reporting obligations. Organisations found to be in breach of the AI Act may face fines of up to EUR 35 million.
The implementation of the AI Act takes place in phases to give businesses sufficient time to build the necessary structure for compliance. Four months have already passed since the chapters on General Provisions and Prohibited AI Practices came into effect on 2 February 2025. The European Commission has been providing detailed guidelines to support businesses and institutions in aligning with the AI Act. The second major application phase begins on 2 August 2025, when a larger part of the AI Act will come into effect. This includes obligations relating to General Purpose Artificial Intelligence (GPAI) with and without systemic risk (Chapter 5), notifying authorities and notified bodies in the event of high-risk systems (Chapter 3, Section 4), and penalties (Chapter 12).
The Code of Practice (CoP) for general-purpose AI (GPAI) models is intended to offer providers of GPAI models important support in complying with the AI Act. The CoP focuses on three main areas: transparency, copyright, and safety and security. The provisions in the AI Act on GPAI can apply to large providers but may also impact smaller developers if their models serve a broad range of tasks. Businesses that adhere to the CoP will be able to demonstrate compliance with the obligations for providers of GPAI under Article 53(1) AI Act. Further information can be found in our analysis of the third draft of the CoP.
The AI Act explicitly states that the provisions on GPAI take effect on 2 August 2025. While the Act itself provides that the CoP should have been completed by 2 May 2025, finalisation has been delayed. The AI Office held a workshop with GPAI providers on 2 July 2025 and is expected to present the final version of the CoP more widely on 3 July 2025 (unfortunately too late for this publication). The final version is expected to be published in mid-August 2025 alongside guidelines for the GPAI rules and how the CoP will support compliance.
This poses a potential challenge for businesses as they will have to implement the requirements of the CoP very quickly if they intend to rely on it. However, reports suggest that a grace period will apply to signatories to take account of the delay. The final CoP is now expected to include copyright safeguards aligned with EU law, and the accompanying guidelines will clarify key concepts such as model thresholds, open-source exemptions, and monetisation. Before the CoP can be recognised as a formal compliance mechanism, it must undergo an adequacy assessment by the European AI Board, with adoption targeted for mid-August.
High-risk AI systems also form a central part of the AI Act, with a long list of requirements that still require further clarification. In accordance with the AI Act, an AI system can be classified as high-risk if it poses a significant threat to health, safety, or the fundamental rights of individuals.
In this context, the European Commission has launched a public consultation targeting not only stakeholders but also providers, developers, and other institutions. This focuses on gathering practical examples and issues related to the classification of high-risk AI systems, including the interpretation of key legal concepts such as "intended purpose". The results will contribute to upcoming guidelines on classifying high-risk AI systems and will also serve other regulatory purposes. The consultation opened on 6 June 2025 and will close on 18 July 2025.
This initiative is required under Article 6(5) of the AI Act, which obliges the Commission to provide further guidelines to support a comprehensive understanding of the classification of high-risk systems and to assist in meeting the requirements of the AI Act. It also sets out rules on the practical application of these requirements and the obligations of operators, including additional responsibilities that may arise along the AI value chain. As previously mentioned, compliance with these rules is crucial to demonstrate conformity with the AI Act and avoid penalties.
Affected businesses will be hoping the EC can publish a final version of the CoP and other relevant guidance well in time to allow them to prepare for compliance. While the drafts give an indication of the direction of travel, final versions are eagerly awaited.
Benjamin Znaty looks at how the fast-paced evolution of AI systems is rapidly challenging existing safety and legal safeguards.
In May 2025, Enkrypt AI, a company specialising in AI security solutions, released a widely cited report demonstrating how easily today’s most advanced AI models can be manipulated into producing illegal and harmful content. The study focused on systems such as Mistral’s Pixtral and DeepSeek’s R1, which were found to frequently respond to adversarial prompts with highly dangerous outputs. Alarmingly, the models were found to be up to 60 times more likely than benchmark systems like GPT-4o or Claude 3.7 to generate child sexual exploitation content when probed with disguised prompts. They also provided detailed guidance on modifying chemical weapons, including methods designed to increase their potency and persistence. What makes these findings particularly disturbing is not only the nature of the outputs but also the ease with which they were obtained. Testers were able to trigger these responses using seemingly innocuous inputs, such as uploading a blank numbered list and asking the model to “fill in the details”.
These are not isolated flaws. They reflect a deeper, systemic issue in the architecture and deployment of the most modern AI models. These findings resonate with the conclusions of the international AI Safety Report, presented at the AI Action Summit in Paris in February 2025. Compiling insights from 96 global experts, the report categorises the risks posed by general-purpose AI into three key areas: malicious use, model malfunctions, and systemic threats. "Malicious use" refers to the intentional deployment of AI to harm individuals, organisations, or society at large. While mitigation techniques are being explored, the report emphasises a hard truth: there is currently no reliable technical method to detect or suppress harmful AI-generated content, and AI progress continues to outpace available safeguards.
The hyper-competitive landscape to build and release more capable models only magnifies these risks. As providers prioritise performance, speed, and market differentiation, safety could potentially become a secondary concern. Grok 3, for instance, was introduced as an "uncensored" alternative to mainstream models, designed to answer prompts that others would reject. According to users, it allegedly generated hundreds of pages of instructions detailing how to execute a chemical attack, including sourcing of materials and deployment strategies. These threats are far from theoretical. The AI Safety Report further highlights that when traditional precursors to dangerous materials are restricted, expert chemists can often identify alternative synthetic routes. AI can help automate and accelerate this discovery process. Other studies similarly warn that generative models may undermine existing safeguards around DNA synthesis by revealing new ways to access restricted sequences.
This represents a dangerous inflexion point. No previous technology has so dramatically lowered the barriers to producing and disseminating harmful content, whether deepfake pornography, grooming instructions, or weaponisation blueprints. But it also revives a longstanding dilemma in the history of innovation: how do we reconcile the tangible benefits of breakthrough innovation with its diffuse, long-term societal risks? Can regulation offer a path forward here?
The EU AI Act introduces a tiered, risk-based framework that mandates transparency, oversight, and conformity assessments for high-risk systems. However, the proposed AI Liability Directive, intended to clarify legal responsibility in cases of AI-related harm, was withdrawn following substantial resistance from stakeholders. This leaves legal uncertainty where high-risk AI outputs may circulate without clear attribution or accountability.
Yet regulation has a critical role to play here. By enforcing transparency, defining liability, and reframing how AI is understood and governed, it can begin to realign incentives across the ecosystem. Algorithmic transparency, in particular, can help dismantle the illusion of AI as an all-knowing oracle, and, in doing so, reshape the direction of innovation itself. The ultimate goal must be to move beyond seeing safety as a mere compliance checkbox. It should become a core driver of innovation. Until that shift takes hold, we risk remaining stuck in a dangerous paradox: the most powerful AI systems ever developed may also be the most hazardous to our collective future.
Karl Cullinane looks at the complexities of using fully automated AI-driven advertising.
The Wall Street Journal recently reported on Meta’s ambitious plan to fully automate its advertising ecosystem by 2026. The company aims to create, deploy, and optimise ads with minimal human involvement. The transformative nature of this initiative is already evident: shares of large advertising agencies fell noticeably on the news, while Meta’s stock price jumped 3.5%. These market reactions underscore how profoundly AI-driven advertising could reshape the digital marketing landscape.
Under Meta’s plan, brands would simply provide a product image and a budget. AI would then generate imagery, videos, and text and determine optimal user targeting. This promises efficiency gains and cost savings, but it also raises significant issues around brand control and integrity. For example, AI models often inherit biases from the data used to train them, reinforcing stereotypes. Such biases risk damaging brand reputation and alienating consumers. They may also breach equality and anti-discrimination laws in various jurisdictions.
Automation also introduces significant legal complexity, particularly within the EU’s regulatory framework.
The use of AI in ad creation raises significant questions about liability and accountability. If an AI-generated ad contains false or misleading information, determining who is responsible becomes complex. The advertiser, AI developer, and platform provider could all potentially be held accountable. This is particularly relevant in cases where AI-generated content leads to consumer harm or breaches law, for example in relation to false advertising claims.
The EU AI Act introduces a shared responsibility model. While Meta, as the provider of the AI system, bears primary responsibility, advertisers deploying these tools also face obligations, particularly if they customise or fine-tune the AI outputs.
Transparency in AI-generated content is crucial for building trust and ensuring compliance with regulations. The AI Act effectively provides for the mandatory labelling of AI-generated content to distinguish it from human-generated content, aiming to prevent misinformation and deepfakes. Failure to meet transparency standards risks not only legal and regulatory penalties but also reputational fallout if consumers feel deceived or manipulated.
Generative AI complicates traditional copyright principles, which typically rely on human authorship. When an AI crafts images, slogans, or videos, questions of ownership can be complex. As a precaution, it is generally recommended to insert a layer of human intervention into the creative process. Even modest human modifications can help establish a stronger case for authorship, which clarifies legal rights.
Hyper-personalised ads hinge on collecting and processing vast amounts of user data. Meta has recently faced scrutiny from European authorities regarding its intent to use public Facebook and Instagram data to train AI models, with the Irish Data Protection Commission and the UK’s ICO challenging Meta’s reliance on legitimate interest rather than explicit user consent. Under the GDPR, profiling and automated decision-making require a lawful basis, transparency, and appropriate safeguards. Meta must ensure that its automated ad tools align with core data protection principles like minimisation and purpose limitation.
Meta’s automation strategy represents a significant evolution in digital marketing. Industry observers highlight its transformative potential, from boosting efficiency and reducing costs to broadening accessibility and scalability, while raising concerns about the possibility of sidelining human creativity. Beyond questions of cost and creativity, the path to full automation crosses complex legal terrain. In the end, it may be lawyers, rather than technological or artistic constraints, who prevent the complete removal of human intervention from digital advertising.
Debbie Heywood looks at the ICO's plans to help ensure lawful use of AI and biometrics.
On 5 June 2025, the ICO published its new AI and biometrics strategy, which aims to ensure organisations develop and deploy new technologies lawfully, and to support them to innovate while protecting the public. The two main challenges are identified as:
The ICO says it will undertake the following.
Give organisations certainty on how they can use AI and Automated Decision-Making (ADM) responsibly under data protection law by:
Ensure high standards of ADM in central government, ensuring decisions affecting people are fair and accountable by:
Set clear expectations for responsible use of ADM in recruitment by:
Scrutinise foundation model developers to ensure they are protecting people's information and preventing harm by:
Support and ensure proportional and rights-respecting use of FRT by the police by:
Anticipate and act on emerging AI risks by:
The ICO lists its guidance and enforcement work to date on AI, ADM and biometrics, and the strategy makes clear which areas the ICO will focus on going forward. The development of foundation AI models, the use of ADM in recruitment and public services, and the use of facial recognition technology by police forces are seen as the most high-impact use cases. Transparency and explainability, bias and discrimination, and rights and redress are the most significant cross-cutting issues.
As the ICO's work on biometrics and ADM evolves, organisations will get more guidance on how to employ these technologies lawfully. The planned code of practice promises to be particularly important. However, in the meantime, compliance with data protection and other applicable law and a focus on fairness, transparency and explainability are critical.
Sasun Sepoyan provides a reminder of the EDPB's views on data protection compliance in AI models ahead of the next phase of EU AI Act implementation.
As AI systems grow in complexity and capability, questions surrounding their compliance with the General Data Protection Regulation (GDPR) have become increasingly prevalent. At the request of the Irish Data Protection Commission (DPC), the European Data Protection Board (EDPB) issued Opinion 28/2024 on 17 December 2024. The DPC had submitted four key questions concerning the lawful processing of personal data during the development and deployment of AI models.
The EDPB stresses that the Opinion only covers AI models, a term narrower than the concept of AI systems as defined in the EU AI Act. The Opinion uses the term AI model “[…] to encompass the product resulting from the training mechanisms that are applied to a set of training data, in the context of Artificial Intelligence, Machine Learning, Deep Learning or other related processing contexts”, covering both AI models “[…] which are intended to undergo further training, fine-tuning and/or development, as well as AI Models which are not”.
The EDPB provides valuable guidance on four questions for developers, providers, distributors, deployers, and regulators operating within the EU’s legal landscape for AI, which can be broadly summarised as follows.
The EDPB makes clear that an AI model trained on personal data cannot automatically be considered anonymous. If an AI model is designed to reproduce personal data or can be prompted to reveal information about identifiable individuals, this constitutes the processing of personal data under the GDPR. Even if such outputs are rare, the possibility of memorisation or re-identification is sufficient to bring the AI model within the GDPR’s scope. According to the EDPB, an AI model can only be considered anonymous if: (i) the risk of directly extracting personal data used in training is insignificant for any data subject; and (ii) the risk of unintentionally revealing such data through interactions with the AI model is also insignificant for any data subject.
The EDPB acknowledges that data controllers may, in principle, rely on legitimate interest as a legal basis for processing personal data when training AI models. However, doing so requires a strict three-part assessment: (i) the interest must be clearly and precisely articulated, lawful and real; (ii) the processing must be genuinely necessary to achieve that purpose; and (iii) the interests or fundamental rights and freedoms of data subjects must not override the controller's legitimate interest (the balancing test). In the context of large-scale AI model training, especially when data is scraped from publicly accessible sources, the EDPB considers that these conditions will not always be met. In particular, the intrusive nature, scale, and lack of transparency of such data collection may, in specific cases, tip the balance in favour of the individual’s rights, precluding the use of Article 6(1)(f) GDPR as an appropriate legal basis in those circumstances. When an infringement is found during the development phase of an AI model, supervisory authorities may impose corrective measures tailored to the circumstances. These can include fines, temporary processing restrictions, or ordering deletion of unlawfully processed data, ranging from parts of the dataset to the entire dataset or AI model.
During the deployment phase of an AI model, controllers must apply the same three-part legitimate interest test as in the development phase. Additional emphasis is placed on assessing risks and user expectations, as well as implementing appropriate mitigation measures such as pseudonymisation and transparency.
Lastly, the EDPB outlines three key scenarios regarding unlawful personal data processing in the development of AI models, emphasising a risk-based approach. First, if unlawful data remains embedded in the AI model, the same data controller’s continued use may be restricted or halted by supervisory authorities, especially where risks to data subjects are high, potentially requiring deletion or retraining. Second, any subsequent users (different data controllers) must conduct an assessment to ensure the AI model was lawfully developed, although limited access to information can hinder this; importantly, AI conformity certificates do not guarantee GDPR compliance. Third, if personal data is properly anonymised after initial unlawful processing, the GDPR no longer applies, and any subsequent use of the anonymised data or AI model is no longer considered unlawful.
Taken together, these conclusions reinforce a core principle: GDPR compliance must be embedded from the very beginning of AI development. It is not enough to focus solely on the nature of an AI model’s outputs; developers, providers, distributors, and deployers must ensure that every stage of the AI pipeline – from data collection to AI model deployment – fully aligns with EU data protection laws. The EDPB’s opinion not only provides clarity on current obligations but also signals more detailed regulatory scrutiny and future guidance on anonymisation and pseudonymisation – key components of a GDPR-compliant AI strategy.
For organisations that develop or use AI models and that fall within the scope of the GDPR and/or AI Act, this Opinion is a reminder to reassess data processing practices, legal bases, and compliance safeguards. At the same time, the Opinion highlights three key lessons:
Shannon Buckley Barnes explains AI-elevated attacks known as vibe hacking.
A new threat has been identified by cyber security experts: vibe hacking. The increased availability of Large Language Model (LLM) technologies means cyber criminals can now generate code that allows them not only to launch a higher number of attacks in a shorter amount of time, but also to increase their success rates. Not only can AI be used to write malicious code; hackers can also use the problem-solving capabilities of LLMs to automatically rewrite malicious code in response to security measures encountered when launching attacks, meaning the malware can adapt and break through those measures.
While the owners of many of the popular LLMs have introduced guardrails to prevent the generation of malware in this way, there is a concern that the ability of hackers to 'jailbreak' these technologies will lead to an increase in attacks by AI-generated and adaptive malware. Fighting AI with AI is the most likely solution, with AI-assisted cyber security protections put in place to respond to and learn from the malware they are designed to defend against.
In light of this looming threat, it is important for organisations to be prepared.
The Cyber Resilience Act (CRA), the NIS2 Directive, the Digital Operational Resilience Act (DORA), and the General Data Protection Regulation (GDPR) each set out obligations in respect of monitoring preparedness and reporting incidents.
For example, DORA places obligations on financial entities within the EU to continuously monitor their ICT systems to detect and mitigate risks. Entities that fall within the scope of NIS2 are also required to monitor their networks and systems for threats. Organisations can meet these requirements by implementing real-time monitoring solutions and maintaining logs to detect cyber incidents.
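By way of illustration only, the sketch below shows the sort of lightweight, real-time log watching that feeds into this kind of monitoring: it tails a hypothetical authentication log and raises an alert when failed logins spike. The file path, pattern and threshold are placeholder assumptions, and in practice organisations would rely on dedicated SIEM or monitoring tooling rather than a standalone script.

```python
# Illustrative sketch only: a naive log watcher that flags bursts of failed
# logins. The log path, pattern and threshold are hypothetical placeholders;
# real DORA/NIS2 monitoring would rely on dedicated SIEM tooling.
import re
import time
from collections import deque

LOG_PATH = "/var/log/auth.log"            # hypothetical log file
PATTERN = re.compile(r"authentication failure")
WINDOW_SECONDS = 60
THRESHOLD = 20                            # alert on 20+ failures per minute

def follow(path):
    """Yield new lines appended to the file, similar to 'tail -f'."""
    with open(path, "r") as handle:
        handle.seek(0, 2)                 # jump to the end of the file
        while True:
            line = handle.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def monitor():
    recent = deque()                      # timestamps of recent failures
    for line in follow(LOG_PATH):
        if not PATTERN.search(line):
            continue
        now = time.time()
        recent.append(now)
        # Drop anything outside the rolling one-minute window.
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()
        if len(recent) >= THRESHOLD:
            print(f"ALERT: {len(recent)} authentication failures in the last "
                  f"{WINDOW_SECONDS}s - investigate and start the incident "
                  "response process.")
            recent.clear()

if __name__ == "__main__":
    monitor()
```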
In the event that a cyber criminal is successful in vibe hacking and an incident is detected, it is important to be aware of the strict incident reporting timelines under DORA, NIS2 and the GDPR and to have a plan in place to assess the incident and action the appropriate response. Responding to incidents requires investigation, threat isolation and mitigation of damage while working towards restoring operations and meeting reporting obligations.
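As a simple aid to that planning, the sketch below works out notification deadlines from the moment an incident is detected. The 72-hour GDPR personal data breach notification window is a firm figure; the NIS2 and DORA entries are indicative placeholders, as the precise timelines depend on how the incident is classified under the applicable rules and technical standards.

```python
# Illustrative deadline tracker. Only the 72-hour GDPR window is a firm
# figure; the NIS2 and DORA entries below are placeholders that should be
# checked against the applicable classification rules.
from datetime import datetime, timedelta

DEADLINES = {
    "GDPR breach notification to supervisory authority": timedelta(hours=72),
    "NIS2 early warning (placeholder)": timedelta(hours=24),
    "DORA initial report (placeholder)": timedelta(hours=24),
}

def reporting_deadlines(detected_at: datetime) -> dict:
    """Map each notification obligation to its deadline."""
    return {label: detected_at + delta for label, delta in DEADLINES.items()}

if __name__ == "__main__":
    detected = datetime(2025, 6, 1, 9, 30)   # example detection time
    for label, deadline in reporting_deadlines(detected).items():
        print(f"{label}: {deadline:%d %b %Y %H:%M}")
```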
In light of the developing threat posed by vibe hacking, it is important for organisations to test the effectiveness of their response plans by penetration testing and drills. These reviews help improve the organisation's security risk management by identifying any vulnerabilities and ensuring it remains ready and prepared to respond to threats in line with its obligations.
Meshah Kuevi looks at some of the government's recent announcements relating to AI funding.
During London Tech Week 2025, the UK government unveiled a series of initiatives aimed at bolstering the nation's position as a leader in artificial intelligence and technological innovation. These announcements reflect a strategic push to integrate AI across various sectors, enhance infrastructure, and attract global talent to the UK. We look at some of the key announcements relevant to AI funding and development in the UK which are intended to help put into place many of the points listed in Matt Clifford's AI Opportunities Action Plan.
In his opening speech at London Tech Week, Peter Kyle, Secretary of State for Science, Innovation and Technology, outlined the government's vision for AI-driven transformation. He announced an £86 billion research and development investment programme, targeting six critical technology areas including AI, as well as advanced connectivity solutions, and cyber security infrastructure as part of a wider, long-term strategy to "build a faster, fairer economy".
The AI Growth Zones programme was also announced, which is specifically designed to stimulate investment in AI-powered data centres and support new infrastructure. Local government bodies and regional authorities, alongside private sector partners, are encouraged to register their interest, with formal applications opening in Spring 2025.
The government's modern Industrial Strategy, announced during the event and published on 23 June 2025, is a ten-year plan focused on eight industry sectors (IS 8) identified as having potential for growth and investment in the UK. They are advanced manufacturing, creative industries, life sciences, clean energy, defence, digital and technologies, professional and business services, and financial services. For each of the IS 8 there is a sector plan, five of which have been published, with the ones for life sciences, defence and financial services to follow. Setting up an AI and copyright framework is a key feature of the digital and technologies sector plan, along with implementing the AI Opportunities Action Plan and establishing the AI Growth Zones.
The government also announced it had formalised a strategic partnership with NVIDIA and various leading UK universities through a comprehensive Memorandum of Understanding. The main focus is on creating and successfully implementing advanced AI connectivity technology into key industries. Other major international technology corporations also announced substantial UK investments, promising to create numerous high-skilled positions nationwide. Notable companies including Liquidity, InnovX AI, and Nebius committed to establishing significant operations in the UK, particularly strengthening the AI and financial technology sectors.
The energy sector received particular attention at London Tech Week, with the showcase of ten innovative British AI solutions designed to reduce household energy costs and support clean energy objectives. These breakthrough technologies encompass AI-enabled thermal mapping drones and intelligent external heating panels. The Manchester Prize, sponsored by the Department for Science, Innovation and Technology (DSIT), provides funding for these projects, which are anticipated to contribute significantly to the UK's carbon neutrality targets.
Finally, the government announced an international talent acquisition campaign to attract leading researchers and innovators globally. This programme, supported by £54 million in funding, aims to draw exceptional expertise to the UK, enhancing research capabilities, and supporting the success of key industrial strategy sectors.
The government is hoping these actions will help cement the UK's ambitions to lead global AI and technology innovation, creating an ecosystem that promotes growth, attracts international talent, and drives significant change across diverse industries.
Sharif Ibrahim looks at the EC guidelines on prohibited AI practices with a focus on the most commercially applicable uses.
Article 5 of the AI Act prohibits certain AI practices, including those considered deceitful and/or manipulative. These prohibitions entered into force on 2 February 2025. The European Commission has published and approved (but not yet adopted) its draft guidelines on the interpretation of Article 5 of the AI Act (Guidelines). Enforcement provisions will apply from 2 August 2025.
The Guidelines provide the EC’s interpretation of the meaning of the prohibitions described in Article 5 but are not legally binding and are potentially open to judicial review if challenged. However, they provide useful insights for organisations preparing for the AI Act's implementation and are therefore a helpful compliance aid.
The Guidelines are very detailed, spanning 135 pages. Here we briefly highlight three of the eight prohibitions discussed in the Guidelines, which are particularly relevant to the commercial deployment/use of AI systems.
For this prohibition to apply, four cumulative conditions must be met:
Subliminal techniques
The Guidelines elaborate on what constitutes a 'subliminal technique', namely those operating beyond (below or above) the threshold of conscious awareness. Examples of subliminal messaging are:
Purposefully manipulative techniques
The Guidelines provide examples of sensory manipulation or personalised manipulation, eg the creation and tailoring of highly persuasive messages based on an individual’s personal data.
Deceptive techniques
An example of a deceptive technique is an AI chatbot that impersonates a person's friends or relatives using a synthetic voice.
Objective or effect of materially distorting behaviour of a (group of) person(s)
Intent is not required here, but there must be a substantial impact on the people involved. There must be a plausible/reasonably likely causal link between potential material distortion of behaviour and the technique deployed by the AI system.
Distorted behaviour causes or is reasonably likely to cause harm
Harm can be physical, psychological, financial and/or economic. While the threshold for harm is hard to establish, factors such as severity, context, cumulative effects, scale and intensity, and the affected persons’ vulnerability should be taken into account.
Appropriate measures to prevent an AI system from deploying manipulative or deceptive techniques can be:
This prohibition is very similar to the previous one, save for the second requirement - that the AI system exploits vulnerabilities due to:
For the social scoring prohibition to apply, the following conditions must be met:
In essence, the prohibition aims to prevent people from being illegitimately singled out and subjected to detrimental consequences based on profiling.
Examples of prohibited social scoring practices
The Guidelines give the following examples of scenarios that fall under the social scoring prohibition:
Examples of permissible social scoring practices
The Guidelines also provide examples of practices that are generally considered permissible:
The 135-page guidance provides a wealth of information on how, at least in the view of the EC, Article 5 of the AI Act should be interpreted. If you have any questions or wish to learn more about any of the (other) prohibitions and how to ensure your AI system does not fall within those prohibitions, please do get in touch.
Helen Farr looks at how to navigate the legal landscape of algorithmic management in employment.
In March 2025, a group of Uber drivers, supported by Organise, protested against what they described as 'automated firings', claiming they had been deactivated on the Uber platform as a result of decisions made by AI. They are not the first to raise this issue as using AI tools to hire and fire staff becomes more common, particularly in the gig economy. This has highlighted a critical dilemma concerning the use of AI in the employment space: the increasing use of algorithmic systems to make key decisions about workers. As the adoption of these technologies accelerates, employers face evolving legal considerations. In addition to issues under the current regulatory framework, further changes are proposed under the Data (Use & Access) Bill (DUA), which is progressing through Parliament.
The DUA Bill proposes a significant shift in how automated decision-making (ADM) is regulated in UK workplaces. Rather than maintaining the existing general prohibition with specific exceptions, the legislation would establish a 'general presumption to permit' ADM systems, albeit with certain safeguards, including the right for individuals to contest decisions, the ability to obtain explanations of how determinations were made, and the option to request human intervention. The aim is to reflect the UK government's dual objectives: promoting AI adoption to drive economic growth while maintaining worker protections and commitments to the fair treatment of workers.
Despite the proposed relaxation of restrictions on ADM in an employment context, employers implementing algorithmic management systems should be mindful of several significant legal considerations:
The lack of established case law specifically addressing AI-driven employment decisions creates uncertainty about how courts will evaluate these systems, adding an additional layer of complexity for early adopters.
The evolving regulatory landscape has several important practical implications for organisations using or contemplating the implementation of algorithmic management tools:
While algorithmic management is currently most prevalent in the gig economy, it is rapidly spreading to the conventional workforce. According to research by the Organisation for Economic Co-operation and Development spanning six major economies, adoption rates for these tools are ever-increasing, reaching as high as 90% in the United States.
As this trend continues, employers should anticipate increased objections from unions about the use of AI and algorithmic management and a focus on this as a collective bargaining issue. Further regulatory developments beyond the current regulatory framework and the emergence of technical standards for algorithmic employment tools are also on the horizon and, in time, there will be a growing body of case law to help establish clearer legal boundaries.
While the UK government appears committed to facilitating greater AI adoption in business operations, the implementation of algorithmic management systems, particularly for high-stakes decisions like terminations, involves complex legal considerations. The regulatory framework remains in flux, and the ultimate balance between innovation and worker protection is still being determined. There is a clear tension between rapid technological innovation and employment security. Despite technological advances, the fundamental principles of following a fair procedure, ensuring that any decision falls within the range of responses open to a reasonable employer, and maintaining human oversight remain essential to avoid claims.
ECIJA's Carlos Rivadulla Oliva looks at the implications of content 'watermarking' in AI-generated text.
In recent weeks, a subtle yet potentially far-reaching discovery has captured the attention of both the AI and legal communities: the latest ChatGPT models - GPT-3.5-turbo and GPT-4-turbo - appear to be embedding invisible 'watermarks' in the texts they generate. While OpenAI has not formally acknowledged this as an intentional feature, independent analysis, including a detailed report by RumiDocs, has revealed a consistent pattern of hidden Unicode characters - most notably the narrow no-break space (U+202F) - across generated content, especially in longer outputs.
These characters, invisible to the human eye, do not affect semantics or readability. However, their systematic placement across AI outputs creates a kind of digital fingerprint that can be detected with the right tooling - diff checkers, hex editors, or character visualisers. This behaviour has not been observed in earlier versions of GPT-3.5, suggesting that these 'signatures' are a byproduct of recent reinforcement learning changes or, more plausibly, an experimental step toward watermarking at scale.
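To make this concrete, the short sketch below (illustrative only; the characters listed are indicative rather than a documented watermark scheme) shows how such characters can be surfaced in a piece of text with a few lines of Python, and how trivially they can be stripped out again.

```python
# Illustrative sketch: surface and strip invisible Unicode characters such as
# U+202F (narrow no-break space). The character set is an assumption for
# demonstration, not a documented watermark specification.
import unicodedata

SUSPECT_CHARS = {"\u202f", "\u200b", "\u00a0", "\u2060"}

def find_hidden_characters(text):
    """Return (position, code point, Unicode name) for each suspect character."""
    return [
        (index, f"U+{ord(char):04X}", unicodedata.name(char, "UNKNOWN"))
        for index, char in enumerate(text)
        if char in SUSPECT_CHARS
    ]

def strip_hidden_characters(text):
    """Replace suspect characters with ordinary spaces - the 'basic
    find-and-replace' that defeats this kind of marker."""
    for char in SUSPECT_CHARS:
        text = text.replace(char, " ")
    return text

sample = "An AI-generated sentence\u202fwith a hidden marker."
print(find_hidden_characters(sample))   # [(24, 'U+202F', 'NARROW NO-BREAK SPACE')]
print(strip_hidden_characters(sample))
```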
This has obvious implications, particularly within the framework of the European AI Act, which imposes clear transparency obligations on providers of general-purpose AI models. Article 50 of the Act mandates that users be informed when they interact with AI and requires that providers of foundation models adopt appropriate measures to ensure the traceability and identification of AI-generated content. If these hidden markers are indeed a form of output traceability, they may be an attempt, however rudimentary, to comply with these looming regulatory demands.
But the issue is not so straightforward. First, these invisible markers are easy to remove, often requiring nothing more than a basic find-and-replace operation. This undermines their effectiveness as a tool for content provenance or authenticity verification. Second, their presence, especially when undocumented, raises complex questions around user consent and data integrity. Are users aware that their outputs might carry hidden metadata? What happens when such content is reused, republished, or attributed?
The copyright implications are equally murky. If an AI-generated text contains hidden identifiers tied to its model of origin, could that be interpreted as a form of authorship assertion? While current copyright regimes do not recognise AI as a legal author, watermarking could complicate the ownership claims of users who rely on these tools for content creation. The presence of embedded, non-removable identifiers might also interfere with the reuse of content under open licences or infringe on the user’s ability to assert exclusive rights.
From a privacy perspective, silent watermarking walks a fine line. While transparency and accountability are valid objectives, especially in combating disinformation or academic misconduct, embedding undetectable code into user outputs without explicit disclosure could be seen as invasive. In a legal context, such practices could clash with the principles of data minimisation and purpose limitation under the GDPR, particularly if the markers are used for post-hoc tracking or behavioural profiling.
Ultimately, watermarking AI outputs - if confirmed - may be a step toward greater traceability as envisioned by the AI Act. But to serve its intended purpose, it must be done transparently, robustly, and with clear safeguards for users’ rights. Otherwise, what begins as a measure to protect against misuse could quickly become a tool for overreach.
As with many issues in AI governance, the devil is in the (invisible) details.
Gregor Schmid, Caroline Bunz, Jakob Horn and Alexander Schmalenberger look at the key issues in the EU's GPAI framework.
The landscape of Artificial Intelligence regulation is rapidly evolving, and the European Union's AI Act (Regulation (EU) 2024/1689) is a significant milestone. It introduces specific obligations for providers of General-Purpose AI (GPAI) models – the powerhouses behind systems like GPT, Llama, and Gemini – particularly concerning compliance with EU copyright law.
A crucial first step for any developer is determining whether their model falls under the AI Act's definition of a GPAI model (Article 3(63) AI Act). A GPAI model is one displaying "significant generality and is capable of competently performing a wide range of distinct tasks". Clarification on this scope is expected in forthcoming Commission Guidelines. The EC has also been working on a GPAI Code of Practice to provide practical compliance guidance. Adoption of a finalised, EU Commission-approved Code offers a pathway to demonstrating proactive compliance.
The third draft of the first GPAI Code of Practice, circulated in March 2025, refines the approach to compliance, notably for copyright obligations. Compared to its predecessor, this draft is more streamlined and introduces proportionality, aligning compliance efforts with the provider's size and capabilities.
The GPAI Code primarily targets GPAI model providers. This includes large tech companies, smaller entities whose models meet the GPAI definition, and businesses fine-tuning such models. 'Downstream providers' integrating these GPAI models should also familiarise themselves with the Code, as it will likely shape expectations and contracts.
The third draft outlined core requirements for potential Code signatories, covering copyright policies, respecting technical controls (paywalls, robots.txt for TDM opt-outs), excluding piracy domains, mitigating infringing output, and engaging with rights holders. However, the emphasis on 'reasonable efforts' and the adequacy of robots.txt has drawn sharp criticism from rights holder organisations. They argue the draft weakens copyright protection, lacks transparency for enforcement, and prioritises AI developers over creators, deeming the draft 'completely unacceptable'.
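As a simple illustration of what respecting robots.txt can look like in practice, the sketch below uses Python's standard library to check whether a hypothetical crawler ('ExampleAIBot') is permitted to fetch a page before collecting it for training. A genuine TDM opt-out check would go further, also taking account of other machine-readable rights reservations, paywalls and contractual terms.

```python
# Minimal sketch: consult robots.txt before crawling a page for training data.
# The crawler user-agent and URLs are hypothetical examples.
from urllib import robotparser

CRAWLER_USER_AGENT = "ExampleAIBot"      # hypothetical crawler name

def may_crawl(page_url: str, robots_url: str) -> bool:
    """Return True only if robots.txt permits this user-agent to fetch the page."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()                        # fetch and parse robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, page_url)

if __name__ == "__main__":
    allowed = may_crawl(
        "https://www.example.com/articles/some-page",
        "https://www.example.com/robots.txt",
    )
    print("Crawling permitted:", allowed)
```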
Broader concerns have also been raised by civil society and legislators regarding the handling of fundamental rights and child protection risks, which were potentially sidelined as voluntary considerations, arguably contradicting the AI Act's intent.
Understanding the relationship between the Code and the upcoming Commission Guidelines is vital. The non-binding Guidelines (under Article 96 AI Act) will focus on clarifying the scope and interpretation of the rules, including the crucial definitions of a GPAI model and 'placing on the market'. Conversely, the Code of Practice provides practical suggestions as to how providers whose models are in scope can meet specific obligations (copyright, documentation, systemic risk management). The Guidelines are expected to confirm that adhering to the approved Code is a key benchmark for demonstrating compliance.
The regulatory framework for GPAI is solidifying amidst intense debate. Defining the precise scope of GPAI models via the Guidelines is a prerequisite. The Code of Practice aims to offer a compliance pathway for models falling within that scope, though its current draft faces significant opposition regarding copyright and fundamental rights protections. Finalisation of both the Code and the Guidelines ahead of compliance deadlines will be critical but timelines have slipped. The EC failed to finalise the Code of Practice by its 2 May deadline although it still expects to have it completed some time in July. Businesses must monitor these developments, including stakeholder feedback, to ensure readiness for the 2 August 2025 application date of the AI Act GPAI provisions.
Debbie Heywood looks at the latest developments.
The UK government launched a consultation on copyright and AI in December 2024. The 'flagship' proposal is to introduce an exception to copyright which would allow AI training using lawfully accessed works, provided rights holders can opt out of such use and in conjunction with transparency requirements for AI developers. This would bring the UK regime more in line with the EU's and introduce an exception similar to the text and data mining (TDM) exception in the EU Copyright Directive. The consultation also covered issues relating to ownership of and liability for AI outputs.
The consultation closed on 25 February 2025, after receiving over 11,000 responses. Predictably, the proposals have proved controversial. Rights holders are concerned that their rights will not be adequately protected and argue that the burden should be on AI developers to get a licence to use their works rather than on them to opt out. AI developers have broadly been more positive but are concerned about burdensome transparency requirements and issues with finding a workable and uniform technical solution for opt-outs. OpenAI suggested in its consultation response that opt-out models have not, so far, proved successful. Google argued that rights holders can already prevent web crawlers from scraping their content.
AI developers (among others) have also suggested that legislating along the proposed lines will be counterproductive to the government's AI ambitions as it will act as a barrier to development and, therefore, to investment and growth.
Meanwhile, Baroness Kidron and Victoria Collins MP, among others, are continuing efforts to introduce provisions around AI and copyright into the Data (Use and Access) Bill, seeking to include a requirement on AI developers with a connection to the UK to comply with intellectual property law and disclose how they obtain training data. At a Parliamentary debate on 23 April 2025, MPs were overwhelmingly against an opt-out regime, some pointing out correctly that the EU's TDM exception has proved highly controversial and not particularly effective in the context of AI. Most MPs emphasised the need for transparency from AI developers, and for effective remuneration for rights holders, rather than a need to change copyright law itself (although many did back the proposed amendments to the DUA Bill).
At the end of the debate, Peter Kyle, Secretary of State for Science, Innovation and Technology, acknowledged that there was no currently available effective technical solution for rights reservation but said "I am determined to make it happen. Surely it cannot be beyond the wit of the clever people who are developing all this technology to develop something", adding a hopeful timeline of 12-18 months. While Kyle underlined the need to protect rights holders, he was adamant that the government does not intend to use the DUA Bill to legislate on this issue.
By way of compromise, the government has introduced amendments which would require it to publish reports on AI training transparency, technical solutions for controlling access to copyright works, and the effect of copyright on the AI market within a year, together with an economic impact assessment of policy options. The DUA Bill returned to the Lords without amendments relating to AI copyright, but they were then re-introduced. It seems we can expect a longer than usual 'ping pong' between the Houses on this issue.
The upshot seems to be that even if the government does decide to proceed along the lines proposed in the consultation, changes will depend on an effective opt-out solution and will, therefore, not be happening in the immediate future. Given the pace of change of AI model development, the government's proposals may be unfit for their ultimate purpose by the time the government is ready to action them. Writers have recently made thousands of complaints following revelations that Meta appeared to have used the LibGen database, which contains over 7.5 million pirated books and 81 million research papers, to train its AI models. In the US, there are over 30 current lawsuits on this issue, including against Anthropic over its allegedly unlawful use of copyrighted material to train its LLM Claude. The core debate may come to focus as much on compensation as on permission before the government decides how to proceed in the UK.
Christian Frank, Stephan Horn and Alexander Schmalenberger look at the new German government's plans for AI.
Germany's new coalition government (CDU, CSU, SPD) has unveiled its plans in the coalition agreement dated 9 April 2025, setting a bold course for the nation's digital future. A central ambition is to transform Germany into a leading 'AI Nation'. This goal aligns with strategic priorities recently announced at the EU level and signals a significant focus on Artificial Intelligence for the coming legislative period.
The agreement promises 'massive' investment in AI infrastructure to achieve this, indicating a clear commitment to building the foundational elements needed for AI development and deployment. While specific budget figures remain elusive, the intent is backed by concrete proposals like a 100,000 GPU programme (an 'AI Gigafactory', likely linked to EU initiatives) and the establishment of networked AI centres of excellence.
Several key AI-related themes dominate the agreement:
Beyond infrastructure, the coalition aims to actively promote German AI language models and foster innovation through AI 'real-world laboratories' (Reallabore), with a particular focus on enabling Small and Medium-sized Enterprises to adopt AI technologies. The connection between AI and robotics is also highlighted as an area for development. Furthermore, a dedicated strategy for 'Culture & AI' is planned, alongside the creation of an expert commission to examine the interplay between 'Competition and Artificial Intelligence'.
Recognising the profound impact of the EU AI Act, the coalition pledges a 'low-bureaucracy and innovation-friendly' approach to its national implementation. A central service point is planned to help companies, particularly SMEs, navigate the new rules. The agreement explicitly states the intention to use the AI Act's built-in reliefs for SMEs, such as technical assistance and regulatory sandboxes. The coalition aims to influence future adaptations of EU digital laws to keep pace with technological change, although this depends on broader EU processes. Crucially, the agreement also flags the need to examine whether existing liability rules require adjustment at the European level, specifically for AI applications.
The coalition sees significant potential for AI within the public sector, planning for its increased use to enhance efficiency in public administration and the justice system. Specific provisions also mention the possibility of using automated data analysis, including AI, by security authorities under defined conditions. To power these ambitions, attracting international AI talent – IT specialists and researchers – is identified as a key priority. Furthermore, a planned 'Innovation Freedom Act' aims to reduce bureaucracy in research, partly by facilitating easier data access, which is expected to benefit AI development.
The 2025 coalition agreement leaves no doubt about the new German government's strategic focus on AI. The 'AI Nation' objective is backed by significant planned investments in infrastructure and targeted initiatives across development, regulation, and application. The approach acknowledges the crucial European context, particularly the need to implement the AI Act effectively while fostering innovation. Ambitions are high but realising this vision will depend heavily on translating these plans into concrete actions, securing the necessary funding, and successfully navigating the complex interplay between national goals, EU-level regulation, and technological advancement. The stage is set for a concerted push towards AI leadership.
Debbie Heywood looks at the EC's plans for EU AI dominance – is it about to reform the AI Act?
On 9 April 2025, the European Commission (EC) published its AI Continent Action Plan which outlines actions to help achieve the EU's goal of becoming a global leader in AI. Key areas of focus are:
As part of this, the EC is running consultations (which close on 4 June 2025) on:
Meanwhile there have been rumours that the European Commission may consider simplifying the AI Act even though it's barely a year since it went onto the statute books. Commissioner Virkkunen told reporters that the Commission was considering whether some reporting obligations could be cut and would seek industry views where 'regulatory uncertainty is hindering the development and adoption of AI'. This would feed into a wider review towards the end of the year.
While there are no specific details on plans to change the AI Act, a spokesperson was quoted as saying 'nothing is excluded'. This is in line with the Commission's planned review of the burden of the new digital rules on businesses and its stated intention of streamlining obligations, which the AI Office has confirmed includes the AI Act, particularly as it impacts smaller businesses. Some fear that pressure from the Trump Administration may place undue emphasis on simplification at the expense of regulation, and large AI businesses have been lobbying extensively, not only on the AI Act itself but also on its accompanying codes, in particular the draft GPAI Code of Practice, currently in its third draft and expected to be presented in final form in May (see our article above for more on that and the GPAI Guidelines consultation).
Read a longer version of this article here.
Karl Cullinane and Jo Joyce look at two recent examples of data protection regulator scrutiny of AI model compliance with GDPR.
Data regulators are getting increasingly involved in issues relating to GDPR compliance when developing and using AI. Organisations aspiring to build cutting-edge AI systems frequently require large quantities of user-generated content (UGC) on which to train their models. UGC is often rich in personal data, and the GDPR places firm parameters on how, why and when personal data can be used.
The Irish Data Protection Commission (DPC) and Norway's privacy regulator, Datatilsynet (the latter in response to a complaint by NOYB), for example, are both looking at compliant AI development and deployment.
On 11 April 2025, the DPC launched an inquiry into X about its use of personal data to train its AI system, Grok. This follows the DPC's landmark application to the Irish High Court in August 2024, in which it sought a court order prohibiting X from processing personal data to train its LLM, the first of its kind brought by an EU lead supervisory authority, which resulted in X undertaking to suspend processing. The inquiry will focus on whether X provided sufficient notice to individuals whose posts were used, whether the company secured a lawful basis for processing and whether the scope of data collection respected the GDPR’s principle of data minimisation. The latter point is of particular interest as the principle of data minimisation appears to directly contradict the current maximalist approach to AI development embodied by social media platforms in general and X in particular.
Addressing an adjacent point, the privacy advocacy group None of Your Business (NOYB) recently made a complaint to Norway's Datatilsynet against OpenAI alleging ChatGPT violated the GDPR's data accuracy principle. NOYB’s complaint deals with the phenomenon of AI hallucinations, focusing on instances where AI systems produce highly specific but false, and potentially defamatory, statements about identifiable individuals. One notable incident involved a Norwegian user who discovered that ChatGPT had falsely generated a story portraying him as a convicted murderer of his children. NOYB contends that OpenAI's disclaimer about errors in ChatGPT's outputs is insufficient to mitigate the harm caused by hallucinations. The complaint emphasises the need to align GDPR requirements around data accuracy and the right to rectification by providing clear mechanisms to correct or erase incorrect data contained within or produced by AI systems. Developers often stress that hallucinations are inherent in emerging AI technology. However, regulators appear increasingly interested in how entities handle accuracy, redress, and accountability issues.
For organisations exploring AI, these developments underscore the need for compliance and governance at every stage of the AI lifecycle, including data collection, development, deployment and ongoing monitoring. While the outcomes of these two investigations are yet to be determined, it is clear that regulatory scrutiny of AI continues to intensify, requiring organisations to stay ahead of evolving regulations, guidance and best practice.
The Irish DPC’s inquiry and the NOYB complaint reflect the escalating tension between cutting-edge AI and existing data protection principles. EU leaders are divided on how best to balance AI innovation with the protection of fundamental rights. Although recent tensions between Washington and Brussels over tech regulation have tended to focus on the Digital Markets Act and AI Act, the GDPR may ultimately prove more influential in shaping the pace and direction of AI development within the EU. Disagreement over how developers can access and use data has led some EU leaders to advocate easing GDPR enforcement to spur progress. In contrast, others favour a stricter adherence to data protection principles supplemented by structured data-sharing measures - such as those envisioned under the Data Act and the Data Governance Act.
The ubiquity of AI technology and the pace of development mean that it is not just social media giants that face these regulatory challenges. Despite its willingness to pursue the likes of X, the Irish DPC offers no guidance on GDPR-compliant development and use of AI beyond a single blog post from July 2024. If smaller organisations are to have a chance of claiming their share of the AI revolution, data regulators will have to step up and offer meaningful support for compliant AI training and development across the EU.
Alexander Schmalenberger and David Klein look at Germany’s ambitions to embed AI into its defence strategy.
Germany's 'Zeitenwende', the historic shift in its security and defence policy initiated in 2022, is expected to enter a new phase under the new Federal Cabinet. While the commitment to bolster defence capabilities is ongoing, a significant intensification of focus on AI within the military is now anticipated, backed by expectations of further dedicated funding. Germany's aspiration to be a leading 'AI Nation' will be bolstered by embedding advanced AI in its military systems and, in particular, its processes – from enhancing reconnaissance with AI-analysed drone footage in projects like 'Uranos KI', to exploring AI in naval vessels and developing the AI backbone for systems like the Future Combat Air System (FCAS). For now, however, AI is mostly used in command and leadership processes to improve the quality and speed of decisions on the battlefield. Software-defined defence is likely to transform into AI-defined defence.
Strategic and funding decisions are expected in the coming months, with the Ministry for Economic Affairs and Climate Action (BMWK), the Ministry of Defence, and the new Ministry for Digitalisation likely to play key roles. Stakeholders need to understand the evolving legal and practical landscape.
The financial commitment remains robust. The EUR 100 billion special fund continues to support major acquisitions, and the reformed ‘debt brake’, exempting certain defence spending, provides a long-term fiscal framework.
As Germany advances its AI-driven military modernisation, key legal aspects require ongoing attention:
The EU AI Act is a significant piece of the regulatory puzzle. It exempts AI systems used exclusively for military purposes, with the emphasis on 'exclusively'. Producers of dual-use AI (technology with both civilian and military applications) need to take into account that the AI Act's often stringent requirements will apply if their product has non-military as well as military uses. This impacts design, compliance, and market access for AI solutions not strictly confined to the military domain. AI technology not originally developed to serve military purposes but later used in military systems also faces legal hurdles even if the technology itself is ready for market. The EU Dual-Use Regulation also governs the export of such technologies.
Germany continues to advocate internationally for a ban on fully Lethal Autonomous Weapons Systems (LAWS), emphasising that any AI in weapon systems must comply with international humanitarian law. Defining and implementing 'meaningful human control' remains a potential practical challenge. So, producers who want to win a procurement contract need to comply, in particular, with the human oversight provisions in the AI Act and develop AI literacy materials for their products.
A new ecosystem of specialised AI defence companies is growing as the technology advances and funding increases. A prominent example is Munich-based Helsing, founded in 2021. It develops AI to digitally modernise existing military systems, such as upgrading the Eurofighter for electronic warfare and contributing to the FCAS AI infrastructure, backed by significant recent funding of EUR 209 million and strategic partnerships.
For businesses, particularly those in dual-use AI, this next phase of the 'Zeitenwende' presents opportunities but underscores the need to comply with the EU AI Act, where applicable, and other regulatory aspects. For both traditional weapon manufacturers and new tech players in the military field, proactive engagement with the developing legal and policy framework will be essential.
Solange Baris looks at the importance of ICT standardisation for AI Act compliance.
As 2025 rolled around, so did a new edition of the Rolling Plan for ICT standardisation (Rolling Plan): an annually updated working document published by the European Commission that merges various ICT standardisation needs and activities in support of European Union policies into a single document.
ICT standards play an important role in supporting harmonisation, interoperability, security, and efficient use of technologies within the EU. The European standardisation framework operates under the Regulation on European standardisation and is carried out through a public-private partnership with the European Standardisation Organisations (ESOs) and their members.
In some cases, the availability of standards can become a precondition for implementing EU policy or legislation. The EU's AI Act is particularly reliant on ICT standards to support its implementation. This year’s Rolling Plan has been updated to reflect progress on existing policy initiatives and highlight the latest developments in AI standardisation – placing particular emphasis on compliance with the AI Act.
Over recent years, ESOs such as CEN, CENELEC, and ETSI - along with international organisations like ISO and IEC - have intensified their efforts to develop detailed technical specifications through which organisations can achieve compliance with relevant legal requirements. Now that the AI Act has entered into force, several obligations have been introduced for providers and deployers of high-risk AI systems and general-purpose AI models. These include requirements for risk management, data quality and governance, transparency and user information, technical documentation, record-keeping, human oversight, and measures to ensure accuracy, robustness, and cyber security.
To operationalise these requirements, the European Commission issued its first standardisation request to CEN and CENELEC in May 2023, with an amended version expected in 2025 to align with the final text of the AI Act. In this respect, CEN and CENELEC, through their joint technical committee CEN-CENELEC JTC 21, are currently working on around 35 standardisation activities, focusing on key areas such as AI trustworthiness, risk management, quality management systems, and conformity assessment procedures. These standards are expected to provide a “presumption of conformity” and will therefore serve as a critical compliance tool.
In parallel to the work carried out by CEN and CENELEC, ETSI is also actively engaging in developing and coordinating AI-related activities within the ICT domain through its Industry Specification Groups. These efforts are coordinated across multiple technical bodies by the Operational Coordination Group for AI (OCG AI), which also maintains continuous dialogue with CEN-CENELEC JTC 21 to support alignment of standards. ETSI’s AI-related initiatives span a wide range of technical areas, including human oversight and the explainability of AI systems, network optimisation through cognitive management architectures, AI security and resilience, and standardisation in machine learning. Together, these contributions from ESOs represent a concerted and multi-dimensional approach to preparing the standardisation landscape for the effective implementation of the AI Act.
While this year’s Rolling Plan may suggest that standards to support compliance with the AI Act are rapidly materialising, it is unlikely that the harmonised AI standards will be published before the majority of the Act’s obligations take effect in August 2026. According to CEN-CENELEC, the development of technical standards remains behind schedule, with much of the work expected to extend through 2025 and into 2026. Timely completion is critical to ensuring that companies are adequately prepared as the different elements of the AI Act start to apply.
Harmonised standards are the cornerstone of AI Act compliance. The Rolling Plan continues to serve as a vital roadmap for policymakers, businesses, and standardisation bodies, particularly in the area of AI standardisation. For organisations that provide or deploy high-risk AI systems or general-purpose AI models, remaining informed and actively engaged will be essential. Although the complete set of harmonised standards may not be finalised until 2026 or later, close monitoring of the ongoing work of ESOs such as CEN-CENELEC is strongly advised (for the status of standards developed by CEN-CLC/JTC 21, see here; for the status of ETSI’s Work Programme, see here). Proactive preparation can help ensure a smoother path to compliance once the AI Act is fully in effect.
Jo Joyce and Hannah Garvey look at the types of AI which are now banned under the EU AI Act, and at the new AI literacy requirement.
Six months have passed since the EU's AI Act entered into force and, as of 2 February 2025, AI use cases that pose an "unacceptable risk" under the law are prohibited in the EU.
Although the practices that are entirely prohibited under the Act represent a small proportion of overall AI use cases, they reflect a broad range of common operations across industries and sectors. Article 5 of the AI Act calls out the following as being prohibited owing to the potential to cause significant harm:
See here for a one-page overview of prohibited AI under the AI Act.
While specific use cases are called out in the AI Act, there remains some uncertainty as to how the law should be interpreted at the margins. How, for example, should a material distortion in behaviour be distinguished from an immaterial one? In what circumstances might adverse treatment arising from social scoring be justified or reasonable? On 4 February, the EC published guidelines on prohibited AI practices which provide 140 pages of explanations and practical examples to help stakeholders understand and comply with Article 5 requirements.
In addition to the prohibition of banned use cases, an AI literacy requirement is also now in force. Providers and deployers of AI systems are required to take suitable measures to ensure that their staff and others engaged in the operation of their AI systems have a sufficient level of AI literacy. Organisations should consider the technical knowledge, experience, and prior training of staff, as well as the way AI systems are to be used when assessing AI literacy. View our AI literacy plan here.
Non-compliance with the AI Act may result in substantial fines, with amounts varying based on the specific obligations violated. The maximum fine (up to €35 million or 7% of the company's annual worldwide turnover from the preceding financial year, whichever is higher) can be levied for non-compliance with the Article 5 obligations related to prohibited AI systems. Although the national competent authorities and the regime to enforce prohibited system bans will not be in force until 2 August 2025, organisations will be liable for any harm caused by prohibited systems, and for a lack of AI literacy within their organisations, from 2 February 2025 onwards.
Nicholas Crossland looks at what DeepSeek means in the broader context of AI development.
What if the next major leap in AI doesn’t cost billions - but millions? Recent headline-grabbing developments with DeepSeek's V3 and R1 models suggest we might already be there. These models don't just represent technical achievements; they signal a broader shift in how we build, scale, and think about AI.
V3 is an open-source large language model (LLM) reportedly trained for less than $10m - a fraction of the cost associated with training models like GPT-4. If accurate, this is an architectural breakthrough, not just an efficiency gain. It suggests that smarter training techniques can achieve comparable performance at significantly lower costs.
Sceptics rightly question the figures provided, likely representing just the final training run. Even if the true cost is 10x higher, it still redefines what is possible. The core takeaway is that we may be entering an era where high-performance AI isn’t limited to tech giants with billion-dollar budgets.
While V3 showcases architectural efficiency, R1 highlights the enduring power of brute-force reinforcement learning (RL). Many people in the industry focused on RL as a path to superintelligence, both because of its intuitive parallel with the evolution of biological intelligence and because of early breakthroughs in advanced AI using deep RL, such as DeepMind's AlphaGo. The power of LLMs, capable of modelling reality through language alone, represented something of a departure from this framework.
R1 feels like vindication for RL. It demonstrates that RL, especially when applied to vertical problems like specific kinds of reasoning, can brute-force models into coherent 'thought' without the need for extensive supervised fine-tuning. This blend of language modelling and RL will likely accelerate us towards more agentic and autonomous AI systems.
A common assumption is that more efficient AI will reduce overall demand for computational resources. But this overlooks the Jevons Paradox: when the cost of a resource drops, total consumption of that resource often rises because more people can afford to use it.
Breakthroughs like V3 and R1 may counterintuitively increase demand for compute. Lower costs democratise AI development, enabling startups, researchers, and even hobbyists to train models that were once the exclusive domain of tech giants. As more players enter the field, total consumption of computational power grows - not despite the lower cost of training a model, but because of it.
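To make the dynamic concrete, here is a minimal numerical sketch of the Jevons effect applied to training compute. All of the figures are hypothetical, chosen purely for illustration, and are not drawn from any market data.

```python
# Hypothetical illustration of the Jevons Paradox applied to AI training compute.
# Every number below is invented for the example.
cost_per_training_run_before = 100_000_000  # notional $ cost of a frontier-scale run
cost_per_training_run_after = 10_000_000    # notional $ cost after a 10x efficiency gain

orgs_training_before = 50     # hypothetical number of organisations that can afford a run
orgs_training_after = 2_000   # hypothetical number once the cost barrier falls

total_spend_before = cost_per_training_run_before * orgs_training_before  # $5bn
total_spend_after = cost_per_training_run_after * orgs_training_after     # $20bn

print(f"Aggregate compute spend before: ${total_spend_before:,}")
print(f"Aggregate compute spend after:  ${total_spend_after:,}")
# Each run is 10x cheaper, yet aggregate demand for compute is 4x higher.
```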
Zooming out, these developments support a thesis that the future of the internet will be dominated by agentic AI - autonomous systems capable of performing complex tasks without direct human oversight. It is reasonable to speculate that AI agents will generate more internet traffic than humans within the next decade.
Until now, barriers like high compute costs and reliance on proprietary APIs have held agentic AI back. But cheaper training, RL-driven reasoning, and open source models remove these obstacles. We're not just improving AI - we're accelerating the emergence of AI agents as integral parts of the digital ecosystem (see more on agentic AI here).
V3 and R1 aren’t isolated milestones; they’re signals of a paradigm shift. As AI becomes cheaper, more accessible, and more autonomous, we’re not just scaling technology - we’re reshaping the very fabric of the internet.
Debbie Heywood looks at the outcome of the Paris AI Action Summit, particularly in light of the new US administration.
Much has changed since the inaugural AI Safety Summit at Bletchley Park in November 2023. The EU has passed its groundbreaking AI Act, and a new US Administration has brought a new attitude to AI safety, putting a different spin on the Paris AI Action Summit held on 10-11 February 2025.
One of President Trump's first acts in office was to rescind a slew of Executive Orders, including Executive Order 14110 on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. Among other things, that EO required developers of frontier AI systems to share safety test results and other critical information with the US government.
On 23 January 2025, President Trump issued an Executive Order on Removing Barriers to American Leadership in Artificial Intelligence. This provides for a new policy position to be set out within 180 days. It also gives federal agencies discretion to undo measures taken in response to the Biden EO. President Trump has also announced a $500bn joint venture investment in AI infrastructure covering data centres and energy development projects.
Rescinding the Biden EO signals a focus by the Trump Administration on AI development rather than AI safety. This approach is not necessarily out of alignment with those of other key countries attending the Paris AI Summit, or indeed with the priorities of the summit itself. The agenda was heavily focused on AI opportunities, with AI safety issues taking up a minority of the time. This was notwithstanding the fact that on 2 February 2025, the AI Act's literacy requirement and bans on certain 'unacceptable risk' AI came into application (as discussed here).
Just before the Paris Summit, the first International AI Safety Report led by Yoshua Bengio was published. It focuses on general-purpose AI and looks at capability, related risks, and risk mitigation. One thing the report highlights is that there is very little knowledge of what happens inside the 'black box', saying "despite rapid advances in capabilities, researchers currently cannot generate human-understandable accounts of how general-purpose AI arrives at outputs and decisions".
There were many initiatives, agreements and investment announcements made at the Paris Summit, and AI safety was certainly not forgotten, but perhaps the most notable thing to arise from it was the refusal of the USA and the UK to sign the declaration on open, inclusive, ethical and sustainable AI. At least 60 countries have signed it including France, Germany, China, India, Japan and Canada.
The USA's snub might have been expected, with Vice President Vance taking aim at what he described as the EU's overly restrictive regulatory framework on AI, data and online safety. The US reportedly objected to the declaration's focus on multilateralism, inclusion and the environment, and to the emphasis on safe and ethical AI development. The UK's decision not to sign was more of a surprise and has led to accusations that it is currying favour with the USA. A government spokesperson said the government would only ever sign up to "initiatives that are in UK national interests", and the government later commented that the declaration was insufficiently clear on global governance and did not address national security. The government did, however, sign other agreements, including on sustainability and cyber security, and, of course, it plans to legislate on AI safety later this year.
Reactions to the UK's decision have been mixed: some argue the government was right to say the declaration did not go far enough on safety, while others' views range from the decision being damaging to the UK's AI aspirations to it helping to promote the UK as a liberal market for AI development. One thing is certain - those hoping for global consensus on AI safety are unlikely to feel optimistic about the outcome of the Paris Summit.
Xuyang Zhu picks out some of the key issues from the UK's AI copyright consultation.
On 17 December 2024, the UK government published a consultation on copyright and AI.
The consultation is premised on the fact that rightsholders are finding it difficult to control use of their works in training AI models, and AI developers are finding it difficult to navigate copyright law in the AI training context. The government says this legal uncertainty is undermining investment in and adoption of AI technology.
The government proposes a new AI training exception that applies to lawfully accessed works and is subject to rightsholders' ability to reserve their rights (or "opt-out"), coupled with new transparency requirements for AI developers.
Acknowledging the uncertainties over the effectiveness of opt-outs under the EU TDM exception, the government proposes that reservations should be made using "effective and accessible machine-readable formats, which should be standardised as far as possible". This suggests that unilateral written notices and website T&Cs would be insufficient, which would resolve a major area of debate under the EU exception. The consultation then seeks views on the technologies and standards that could be used for effective opt-outs.
The level of granularity for transparency reporting is still up in the air but could include requirements to disclose use of specific works and datasets as well as details of web crawlers used and evidence of compliance with rights reservations.
On models trained outside the UK, the government wants to encourage AI developers operating in the UK to comply with UK law on AI model training even if their model was trained in other countries. The government is therefore seeking views on what other measures could help establish a level playing field between providers of models trained inside and outside the UK, without putting forward any particular proposals on this point.
The consultation refers to arguments that the temporary copies exception can apply to AI training and notes that it is not clear whether this is the case. It seeks views on whether clarification is required. This exception, if it applies, could allow AI developers to get around opt-outs under the new AI training exception. The consultation also seeks views on whether the existing TDM exception for non-commercial research remains fit for purpose.
The consultation also deals with rights in AI outputs. It confirms that AI-assisted outputs made by a human creator using an AI tool, and "entrepreneurial" works such as sound recordings and films, can be protected by copyright. It seeks views on whether the CDPA's provision on ownership of "computer generated works" actually helps incentivise the development and adoption of AI, in which case the government might consider reforming the provision; if there is insufficient evidence that this protection has positive effects, it proposes removing the specific protection for computer generated works altogether. Removing the provision would also remove the main hook for arguing that AI-generated (as opposed to merely AI-assisted) literary, artistic, dramatic and musical content is protected by copyright in the UK.
The consultation also addresses other matters connected to AI outputs, including liability for infringing outputs, labelling of AI-generated outputs, and digital replicas.
In a joint statement published on 19 December 2024, creative industry bodies voiced concerns over the proposals. Attempts are also being made in the House of Lords to introduce a requirement on AI developers with a connection to the UK to comply with UK intellectual property law and disclose how they obtain training data, as part of the Data (Use and Access) Bill. AI developers on the other hand have, as might be expected, been more positive about the government's proposals. However, both creative industries and AI developers have an interest in ensuring that any opt-out process is straightforward and workable to implement. The consultation closes on 25 February 2025. It remains to be seen what the outcome will be.
Harry Ruffell looks at the impact of US regime change on investment in AI in the UK and Europe.
The return of Donald Trump to the US Presidency could have far-reaching implications for AI investment in the UK and Europe. President Trump's policies, both domestically and internationally, tend to be characterised by protectionism, deregulation, and a focus on American interests. This is likely to affect AI investment across the Atlantic even if it's not yet clear exactly what the impact will be.
The AI sector relies heavily on global supply chains for hardware components like semiconductors. Increased tariffs or trade barriers could disrupt these supply chains, leading to higher costs and potential delays in AI development projects. An escalating trade war between the US and China is likely to affect the critical semiconductor supply chain, which could raise costs as companies within these value chains invest domestically to maintain reliable supply.
However, the release of DeepSeek could have a significant effect on the global semiconductor supply chain in the opposite direction. If DeepSeek's model is widely adopted (and if it really does use significantly less hardware and energy), a large part of the current worldwide AI infrastructure could end up as excess capacity, causing prices to fall. This may also spur innovation among start-ups and smaller companies, attracting more investment to the market (read more about DeepSeek here).
President Trump's deregulatory approach contrasts sharply with the EU's stringent regulations on AI safety, product liability and data privacy. If the Trump administration takes a more relaxed approach compared to the EU's rigorous ethical standards surrounding AI deployment, this disparity could create friction for cross-border collaborations and make the US a more attractive investment prospect with fewer constraints on innovation. It remains to be seen where the UK lands on the AI safety spectrum with legislative plans not due to be published before the Spring.
President Trump's immigration policies could also be a double-edged sword for European AI investment.
On the one hand, stringent immigration controls proposed by Trump could restrict the mobility of researchers and professionals between Europe and the US, potentially hindering collaborative efforts vital for advancing AI technologies.
Conversely, stricter US immigration policies (such as restrictions on H-1B visas and work permits) might benefit Europe if highly skilled individuals avoid working in the US, with the UK and EU - which often streamline visa processes for tech professionals - being the next most attractive destinations for work and research.
Strained relations between the EU and US under the Trump administration could lead to reduced cooperation in technology transfer, joint research projects, and funding initiatives, all of which are crucial to the AI industry.
President Trump's hardline stance against China’s technological advancements might also pressure the UK and EU into aligning more closely with US policies against Chinese tech firms, potentially impacting existing partnerships between European entities and Chinese companies in the AI domain.
Government funding plays a crucial role in fostering AI innovation. President Trump has already announced $500bn of private investment in AI infrastructure, but prioritising American companies for federal grants or subsidies could limit opportunities for European firms looking to collaborate with or receive funding from US-based entities.
Enhanced support for domestic R&D under Trump’s administration might, however, spur similar initiatives within Europe as nations strive not to fall behind technologically.
It's too soon to tell exactly what impact the Trump administration will have on AI development, or whether DeepSeek will prove to be a significant disruptor, but both will have far-reaching implications for the industry.
Debbie Heywood looks at the UK government's progress on its commercial and legislative AI agenda.
Matt Clifford's AI Opportunities Action Plan and the government's response to it were published on 13 January 2025. The Action Plan makes 50 recommendations, all of which the government says it will take forward with two slight caveats.
As the government itself points out, the Plan focuses less on AI safety issues and more on leveraging AI to help with productivity and growth, and to deliver more efficient public services at lower cost. This sits alongside addressing infrastructure needs, upskilling and attracting top AI talent, enhancing public trust in AI, and reducing barriers to uptake, while still taking account of safety, governance and environmental issues.
Highlighted ambitions include:
The government has said it will continue to develop its policy response to the Action Plan as part of the broader work ahead of the Spring 2025 Spending Review. It will further set out its wider approach to AI in the Industrial Strategy's Digital and Technologies Sector Plan which will be driven by the newly created AI Opportunities Unit in DSIT. Matt Clifford has been appointed AI Opportunities Adviser to the Prime Minister and work will now begin on the recommendations made in the Plan with deliverables starting from Spring 2025 and going all the way to 2030.
What this really means is that much of the detail as to how the 50 recommendations will be implemented will follow over the course of the year and not before Spring.
The government has also said it will publish AI legislation "shortly", having announced its intention to do so in the July 2024 King's Speech, although it now appears a consultation will not be published until the Spring. It's unclear exactly what the legislation will cover. The King's Speech said it would help ensure the safe development and use of AI models, and it looks set to focus on frontier AI, but it may also cover copyright, access to data and public sector use of AI. Certainly, there is much more to come from the government on AI this year.
Read a more detailed version of this article here.
Benjamin Znaty examines how AI agents are expected to reshape all industries and the legal challenges they will bring.
If 2024 was marked by the explosive growth of generative AI, 2025 will undoubtedly witness the rise of agentic AI. These autonomous systems are set to revolutionise various sectors, and Nvidia’s CEO recently described AI agents as a “multitrillion-dollar opportunity”, emphasising the magnitude of the transformation ahead. According to McKinsey, while these systems have existed for years, the breakthrough in generative AI driven by advances in large language models (LLMs) is now accelerating the development of agentic AI.
Agentic AI can be broadly defined as AI systems capable of executing complex tasks and workflows with minimal human intervention. As such, it differs from generative AI, which still primarily focuses on content creation rather than operational performance. Another key distinction lies in the configuration: generative AI responds to specific prompts entered by a user to generate content, whereas AI agents operate autonomously, continuously learning and making decisions based on contextual inputs gathered from trusted data sources rather than human prompting.
The productivity gains are evident: AI agents are expected to enable companies to develop sophisticated workflows that previously required significant human resources. Their impact is already tangible across multiple sectors. In customer service, AI agents have far surpassed traditional chatbots, handling entire interactions, adapting responses dynamically, and enhancing customer engagement. In software development, autonomous coding assistants are writing and debugging software with remarkable efficiency, significantly reducing the need for human oversight. Google is also said to be developing autonomous AI agents capable of handling even the most complex human tasks in software and product development, as well as in interpreting regulations and policies. Businesses across all industries - ranging from highly regulated fields like finance and healthcare to supply chain logistics and professional services, including law firms - are expected to rely increasingly on AI agents in the coming years.
While these advances signal a paradigm shift in automation, they also introduce significant legal concerns. AI agents present challenges in terms of liability and compliance, as existing legal frameworks - built around human agency, product liability and corporate accountability - already struggle to accommodate autonomous systems. These issues are not new and have been on the minds of legal professionals for years, particularly in relation to earlier technological developments such as the Internet of Things. However, the exponential risks associated with agentic AI in these areas should not be underestimated. These legal questions are likely to become more complex and urgent than ever before with the rise of agentic AI.
While many of the requirements introduced by the EU AI Act - particularly those on transparency, risk management, and human oversight - will address some of the risks posed by AI agents, the Act is not designed to provide liability rules for autonomous decision-making.
As AI agents evolve, regulations will need to be updated to directly address critical legal concepts such as agency and product liability. The Product Liability Directive has recently been revised and now extends the definition of the products it covers to software, including AI systems. The so-called AI Liability Directive is also intended to help in this process, but while the proposal is supported by consumer protection associations and some members of the European Parliament, it is also facing significant criticism from various stakeholders, making the adoption process slower.
The rise of AI agents is a major opportunity across all industries but compels legal professionals to reassess fundamental legal principles. The challenge for regulators and legal experts in 2025 will be to ensure that innovation does not outpace the law.
ECIJA's Carlos Rivadulla Oliva and Paula Klimowitz Gumpert look at the Spanish AI Sandbox – how to participate, and what it aims to achieve.
The Spanish government has initiated the application process for its first AI Sandbox, a testing environment designed to facilitate compliance with Regulation (EU) 2024/1689 – the EU's AI Act. This tool will enable participants to implement and refine high-risk AI systems by developing technical guidelines and best practices under regulatory supervision. It is overseen by the State Secretariat for Digitalisation and Artificial Intelligence (SEDIA). The primary objective of this initiative is to promote AI innovation by establishing a controlled testing environment.
Access to the AI Sandbox is limited to 12 high-risk AI systems, selected against specific criteria: the degree of innovation or technological complexity; the degree of social, business or public interest; the degree of explainability and transparency of the algorithm embedded in the AI system; the alignment of the applicant entity(ies) with the Spanish Government's Digital Rights Charter; the level of maturity of the AI system; the quality of the technical report submitted; and the typology of the entity. These criteria are set in accordance with Article 8 of Royal Decree 817/2023 of 8 November (Real Decreto 817/2023, de 8 de noviembre), ensuring fairness and a varied representation of high-risk systems. To be accepted, an applicant's high-risk AI system must exceed minimum thresholds of 50% of the maximum score on each of the innovation and social, business or public interest criteria, as well as 50% of the total score across all criteria.
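Purely as an illustration of how the published 50% thresholds operate, the sketch below encodes them as a simple eligibility check; the criterion keys, the 0-10 scoring scale and the sample scores are our assumptions, not part of the official process.

```python
# Illustrative sketch of the sandbox selection thresholds described above.
# The 0-10 scale, criterion keys and sample scores are assumptions for the example.
MAX_SCORE = 10  # assumed maximum score per criterion

def exceeds_thresholds(scores: dict[str, float]) -> bool:
    """Return True if the application exceeds the 50% minimum thresholds."""
    half = MAX_SCORE / 2
    # 50% of the maximum score on each of the two highlighted criteria
    if scores["innovation"] <= half or scores["social_business_public_interest"] <= half:
        return False
    # 50% of the total score across all criteria
    return sum(scores.values()) > (MAX_SCORE * len(scores)) / 2

example_application = {
    "innovation": 7,
    "social_business_public_interest": 6,
    "explainability_transparency": 4,
    "digital_rights_charter_alignment": 5,
    "maturity": 6,
    "technical_report_quality": 5,
    "entity_typology": 4,
}
print(exceeds_thresholds(example_application))  # True for these hypothetical scores
```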
Applicants must fulfil certain conditions, such as being resident in Spain, owning the intellectual property rights to the high-risk AI system, complying with applicable regulations, the system being introduced to the market or put into service, and complying with data protection legislation where personal data is processed.
The process will consist of two stages: one for the implementation of requirements and the other for post-launch monitoring, with an estimated duration of 12 months. The first stage will be developed through technical guidelines and specifications, as well as expert advice provided by SEDIA. Participants will have to analyse the obligations applying to their high-risk AI systems and will have to develop a compliance plan. They will also have to submit a declaration of compliance with the requirements. In the second stage, a post-launch monitoring plan will be developed, and communication channels will be set up for queries and incident notifications.
Nicholas Crossland looks at the ICO's position on generative AI and data protection and at what this means for AI developers and other stakeholders.
In December 2024, the UK Information Commissioner’s Office (ICO) published a response to its five-part consultation series on generative AI and data protection. This addresses how key principles of the UK GDPR and the DPA 2018 apply to the development and deployment of generative AI systems.
The ICO launched its consultation series in January 2024, aiming to clarify regulatory expectations around development and use of generative AI, amid growing uncertainty, particularly concerning the lawful processing of personal data, purpose limitation, data accuracy, individual rights, and the allocation of controllership across AI supply chains.
Lawful basis for web scraping
The ICO reaffirms that legitimate interests remain the most viable lawful basis for processing personal data collected through web scraping to train generative AI models. However, this is contingent on meeting the stringent requirements of the usual legitimate interests three-part test (purpose, necessity and balancing).
The ICO expects significant improvements in transparency, including clear information about what personal data is collected and how it is processed. This is particularly important for the balancing element. Given the potentially high-risk nature of invisible processing, the ICO says developers face challenges in justifying that their interests outweigh individuals' rights in the absence of adequate transparency. However, even with organisational transparency improvements, issues around model explainability and interpretation are likely to remain.
Purpose limitation
The ICO emphasises that different stages in the generative AI lifecycle - training, fine-tuning, and deployment - constitute distinct purposes. Data controllers must explicitly define these purposes and assess the compatibility of any secondary data uses with the original purpose of collection. Broad, undefined purposes such as simply “developing a model” are insufficient under UK GDPR.
Accuracy of training data and model outputs
The ICO clarifies that developers are responsible for ensuring the accuracy of personal data used in training datasets. While verifying the factual accuracy of large datasets is practically very difficult, this does not absolve developers from accountability. Moreover, the statistical accuracy of AI-generated outputs should be proportionate to their intended use. Developers are encouraged to adopt transparency measures, such as labelling outputs and providing information on reliability and limitations.
Privacy by design
The ICO insists that organisations must design generative AI systems with data subject rights in mind from the outset. This includes facilitating rights such as access, rectification, and erasure, even where technical barriers exist. The ICO expresses concern over the lack of effective mechanisms to support these rights in current generative AI models, particularly regarding data derived from web scraping. Reliance on Article 11 (processing without identification) requires a high level of justification, and controllers must provide avenues for individuals to identify their data where feasible.
Allocating controllership across the AI supply chain
The ICO provides clarity on controllership in generative AI. It reiterates that contractual arrangements alone do not determine data protection roles; instead, actual influence over the purposes and means of processing is decisive. In many “closed-access” models, joint controllership between developers and deployers is likely, given their shared influence over data processing decisions. The ICO rejects the notion that developers can universally claim processor status for downstream data uses, particularly when they retain significant control over model architecture and deployment conditions.
The ICO’s response is helpful for understanding its priorities and assessing enforcement risk over the next few years. Its insistence on transparency, accountability, and data subject rights reflects a strong commitment to upholding data protection principles amid rapid technological change. Notably, the clarification that legitimate interests is the only feasible lawful basis for web scraping - subject to strict compliance tests - places considerable responsibility on AI developers to justify their data practices.
While the ICO acknowledges the technical challenges in areas like data accuracy and machine unlearning, it does not offer blanket exemptions. It will be up to the generative AI industries to adapt to these challenges, meaning that the consultation response arguably encourages innovation within clear boundaries. In our view, practical implementation around facilitating individual rights in large-scale AI models remains the greatest unsolved problem for privacy in generative AI.
By reinforcing the applicability of UK GDPR principles to emerging technologies, the ICO, as expected, makes clear that innovation must not come at the expense of fundamental rights.
Kira Raguse, Carla Nelles, Benedikt Kohn and Susan Hillert look at what the EDPB’s Opinion on the use of personal data for the development and deployment of AI models means for businesses.
The European Data Protection Board (EDPB), which harmonises GDPR practices across EU Member States, issued Opinion 28/2024 on 18 December 2024, providing guidance on GDPR compliance regarding AI development and use. The Opinion was published in response to an Article 64(2) GDPR request by the Irish Data Protection Commissioner. Key areas addressed include anonymity of AI models, legitimate interest as a legal basis, and the consequences of unlawful data processing.
The EDPB highlights that AI models trained on personal data are not inherently anonymous and should be assessed on a case-by-case basis. Risks like Membership Inference Attacks (determining whether specific data was in the training set) or Reconstruction Attacks (rebuilding input data from the model) underscore this concern. For example, studies have shown AI models unintentionally revealing sensitive healthcare data.
To ensure anonymity, controllers must apply robust techniques like differential privacy, which safeguards data by adding controlled noise to prevent re-identification. Supervisory authorities will evaluate whether these methods meet GDPR requirements.
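By way of illustration only, the snippet below shows the core idea behind one standard differential privacy technique, the Laplace mechanism, applied to a simple count query. The data, epsilon and sensitivity values are hypothetical, and real AI training pipelines apply these ideas in far more sophisticated ways.

```python
# Minimal sketch of the Laplace mechanism: adding calibrated noise to a statistic
# so that the presence or absence of any single record has a limited effect on
# the output. Data, epsilon and sensitivity are illustrative assumptions only.
import numpy as np

ages = np.array([34, 45, 29, 52, 41, 38, 60, 27])  # hypothetical records

epsilon = 1.0      # privacy budget: smaller epsilon = stronger privacy, more noise
sensitivity = 1.0  # a count changes by at most 1 if one record is added or removed

true_count = int((ages > 40).sum())
noisy_count = true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

print(f"True count over 40: {true_count}")
print(f"Differentially private count: {noisy_count:.2f}")
```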
The EDPB acknowledges legitimate interest under Article 6(1)(f) GDPR as a valid basis for AI-related data processing, provided a three-step test is met:
Unlawful use of personal data to train AI can have cascading effects:
The Opinion does not address topics like special category data (Article 9 GDPR) or automated decision-making (Article 22 GDPR). It is also vague on the issue of whether use of a model which has been trained using unlawfully processed data is, itself, unlawful, saying only that it may be. These issues may not have been dealt with owing to the eight-week time limit for responding to Article 64(2) requests and may be covered in greater detail in future.
Debbie Heywood looks at UK policy announcements and rumours around AI legislation since the July 2024 general election.
New AI legislation was announced by the government in the King's Speech of 17 July 2024. The aim of the legislation is to "seek to establish the most appropriate legislation to place requirements on those working to develop the most powerful AI models”. Curiously, there was no elaboration on what the legislation might cover in the background briefing notes to the speech although it seems clear that any proposed legislation would be far less comprehensive than the EU's AI Act and there is widespread agreement that it will focus on safety of frontier systems.
At the Labour Party conference in September 2024, AI Minister Feryal Clark hinted that legislation could well go further, saying she was "in the process of bringing forward legislation" intended to clarify the use of copyrighted materials to train AI, and suggesting a consultation would take place as early as October. She has since clarified her remarks, saying instead that the government is conducting a series of round tables with stakeholders to try to resolve copyright disputes between British AI companies and creatives. Speaking at The Times Tech Summit, Clark suggested an agreement could come by the end of the year and that it might take the form of an amendment to existing laws or entirely new legislation. Transparency and the right to opt out of having copyrighted materials used to train AI models are expected to be a focus of the discussions, but there has also been talk of introducing an extended TDM (text and data mining) exception similar to the one in the EU Copyright Directive – an initiative previously rejected by the then UK government in 2023 – to cover TDM for commercial purposes in certain circumstances (see here for more on this issue). Another area where there are mixed messages is whether or not the AI Office will be put on a statutory footing.
Whatever the AI legislation contains, it will be a departure from the previous government's policy as stated in its White Paper on AI, published in August 2023, which concluded there was no need for AI-specific legislation. Just before the 2024 general election, there were, however, rumours that the Conservative government was working on AI legislation which was widely expected to make mandatory the currently voluntary commitments by leading developers of large language models/general purpose AI to submit algorithms to a safety assessment process. There were also suggestions then that copyright legislation would be amended to allow organisations and individuals to opt out of allowing LLMs to scrape their content.
It initially seemed likely that any planned legislation would not cover the public sector, which may explain why Lord Clement-Jones introduced a Private Members' Bill on AI in the House of Lords on 9 September. The Bill relates to mitigating the risks of AI use by public authorities, with a focus on potential bias and automated decision-making. It would require public authorities to take certain protective measures, including around impact assessments, transparency, log maintenance and retention, and explainability. It would also provide for an independent dispute resolution mechanism for allegedly unfair or disputed automated decisions. The Ada Lovelace Institute said in September that local authorities are struggling to navigate the 16 pieces of legislation and guidance covering the use of AI in public procurement, so they might indeed welcome legislation in this space, and lately there have been suggestions that the public sector could be within scope of the upcoming legislative proposal.
On 15 October 2024, the UK government published a Green Paper, Invest 2035 – a Modern Industrial Strategy, for consultation. As you might expect, AI is mentioned several times, mostly as an opportunity for strengthening the UK's position in sectors such as life sciences, digital and technologies, data-driven businesses and defence. The Strategy also refers to the AI Opportunities Action Plan led by Matt Clifford and launched in July 2024, which will propose an "ambitious plan to grow the AI sector and drive responsible adoption across the economy". The government is widely expected to publish its AI Plan in November, potentially alongside a consultation on new legislation.
ECIJA's Carlos Rivadulla Oliva looks at the EU's progress on regulating AI and at how to prepare for compliance, covering the AI Act, the AI Pact and the AI Liability Directive.
With the EU's AI Act having come into force on 1 August 2024, the EU is at the forefront of regulating artificial intelligence. Businesses operating in the EU must brace themselves for the gradual implementation of the requirements and obligations under the AI Act, which will apply to a greater or lesser degree to all operators in the AI value chain.
Central to the preparation process is the EU AI Pact, also announced on 1 August 2024. This is a non-legislative, voluntary commitment by companies to comply with the principles and future obligations laid out in the AI Act ahead of provisions becoming applicable. This Pact serves as both a soft-landing for businesses to test compliance and as a political move to engage stakeholders early.
The EU AI Pact is significant because it allows businesses to get ahead of the compliance curve. It emphasises collaboration between public and private sectors to address the risks posed by AI technologies. Signatories commit to the ethical use of AI, focusing on ensuring that AI systems are lawful, transparent, and accountable, reflecting the risk-based approach of the AI Act. Although voluntary, participating in the AI Pact sends a strong message of corporate responsibility and readiness for the incoming obligations under the AI Act. On 25 September 2024, the European Commission announced that over 100 companies had signed up including Amazon, Google and Microsoft.
AI transparency as a key compliance priority
Among the many obligations that companies will face under the AI Act, one stands out as particularly critical: AI transparency. The AI Act divides AI systems into categories based on their risk profiles, with “high-risk” systems subject to the strictest requirements. One of these is the demand for transparency, which means that operators of high-risk AI systems must provide clear information about how their systems function and make decisions.
Transparency is essential for building trust in AI systems and ensuring accountability. The transparency requirements under the AI Act are multifaceted. First, users must be informed when they are interacting with an AI system rather than a human, especially in cases involving automated decision-making. Second, companies must be able to explain, in layperson’s terms, how the AI system operates, particularly how it processes data and arrives at specific outcomes.
The complexity of many AI systems poses a challenge, particularly in the context of advanced machine learning models like neural networks. Organisations must prioritise not only understanding the technical workings of their AI but also translating these mechanisms into clear and comprehensible terms for regulators, users, and stakeholders. Compliance with transparency requirements will also likely involve documentation and regular audits of AI systems to ensure they are functioning as intended and are aligned with the principles of fairness, accountability, and non-discrimination.
Update on progress of the AI Liability Directive
In tandem with the AI Act, the AI Liability Directive (AILD) is intended to play a crucial role in harmonising the legal landscape for AI across the EU. The AILD is designed to establish clear rules regarding liability for damage caused by AI systems. It focuses on facilitating claims for those harmed by AI, making it easier to prove causality and liability in cases involving complex AI systems.
The Commission proposed the AILD in September 2022, but there are suggestions that progress has stalled and the current version may yet be significantly amended or withdrawn altogether.
The European Parliament's JURI committee is expected to decide shortly whether or not to proceed with the Directive as it stands, following an impact assessment by the European Parliamentary Research Service, published in September 2024, which called for changes amid concerns that the AILD overlapped too much with the AI Act and the recently agreed revised Product Liability Directive. Its recommendations include that the legislation should be a Regulation rather than a Directive, that the focus should be more on software liability with the scope extended to non-AI software in order to align with the revised Product Liability Directive, and that there should be extensions to certain areas of liability and damages claims.
As the legislative landscape continues to evolve, organisations must stay agile and informed, actively preparing for both the AI Act and, potentially, the AI Liability Directive, to mitigate risks and capitalise on the benefits of compliant AI innovation.
Benjamin Znaty looks at what's really behind the current trend of delaying AI product releases in the EU.
Several major tech companies have recently postponed the release of new AI features and services in the EU. In almost all cases, the press has cited the legal challenges these companies face in ensuring compliance with the latest EU regulations before launching their AI innovations. But could there be more strategic reasons at play?
Apple’s decision to delay the release of its 'Apple Intelligence' AI features in France and across the EU was attributed to "regulatory uncertainties" stemming from the Digital Markets Act (DMA), in an article published by The Verge. These AI capabilities will be rolled out gradually worldwide, with EU countries being among the last to gain access. Apple reportedly had concerns about the DMA's interoperability requirements, which could force the company to open its ecosystem. While Apple is said to be working with the European Commission to ensure these features are introduced without compromising user safety, the actual link between delaying the launch of Apple Intelligence in Europe and addressing these concerns remains unclear.
This decision to delay the launch of AI capabilities in the EU is by no means unprecedented. In early October 2024, OpenAI introduced its highly anticipated 'ChatGPT Advanced Voice Mode' in the UK but chose not to release it in EU countries. Reports indicate that OpenAI attributed this decision to the need to comply with EU regulations, specifically the EU AI Act. The press highlighted Article 5 of the EU AI Act, which prohibits the use of AI systems for inferring emotions. However, that prohibition only applies to the use of this type of AI in "areas of workplace and educational institutions", leaving the connection between Article 5 of the AI Act and the new ChatGPT feature somewhat ambiguous. Perhaps for this reason, in a tweet on 22 October, OpenAI finally announced its decision to roll out the feature across the EU.
The GDPR is also regularly cited as a potential stumbling block to AI development in the EU. In June 2024, Meta announced at its developer conference that upgrades to its Llama AI product would not be possible for the time being in Europe. In a public statement, Meta explicitly stated that the delay was related to GDPR compliance issues, particularly in light of scrutiny from the Irish Data Protection Commission (DPC). According to Meta, requests made by the DPC hindered the training of its large language model, which relies on public content shared on Facebook and Instagram. While Meta has made the pause on using EU data to train its AI model permanent in the EU, it has resumed these processing activities in the UK, where the ICO continues to maintain a watching brief but has not so far required Meta to cease the processing.
This was not the first time Meta had run into regulatory scrutiny over its use of AI. Three years ago, it announced it would cease using facial recognition technology for tagging purposes on Facebook in light of privacy concerns. On 21 October 2024, however, it said it was planning to start using facial recognition again to verify user identity, help recover hacked accounts, and detect and block some types of scam ads. Interestingly, Meta said it would not be testing facial recognition for identity verification purposes in the EU, the UK or the US states of Texas and Illinois, jurisdictions in which it is continuing to have conversations with regulators. Meta’s vice president for content policy is reported to have said that the “European regulatory environment can sometimes slow down the launch of safety and integrity tools like this. We want to make sure we get it right in those jurisdictions".
Whichever EU regulatory framework is cited in the above cases - the DMA for Apple, the AI Act for OpenAI, or the GDPR for Meta - the outcome is that EU consumers may experience short-term delays in accessing innovative AI technologies. Looking at the longer-term prospects, though, these regulatory frameworks arguably present an opportunity for tech businesses. While businesses may need to postpone releases of new AI technologies and features, as Meta has indicated, these organisations will be working to ensure that their products meet EU regulatory requirements while also preserving their commitment to user privacy and data security in a complex regulatory landscape. Creating customer trust will be fundamental to take-up, so taking the time to get it right may actually increase profitability which, in turn, will further fund innovation.
Whether or not the EU's approach to regulation leads to enhanced consumer protections at the expense of technological progress in Europe is yet to be determined, but it’s important to recognise the ongoing interaction between big tech corporate strategies and regulatory oversight when launching AI capabilities in Europe.
Gregor Schmid looks at the implications of the Hamburg Court's decision on the Text and Data Mining copyright exemption's applicability to training generative AI in the EU.
In a decision of 27 September 2024, the Hamburg Regional Court dismissed the lawsuit of a photographer against LAION, the provider of the LAION-5B image-text dataset. The main reasons for the decision are based on the copyright exception for Text and Data Mining (TDM) for purposes of scientific research, but the decision also addresses a number of other issues, such as the applicability of the TDM exceptions to the training of generative Artificial Intelligence, the requirements for declaring a reservation of rights according to the general TDM exception, and the conditions of “machine readability”. The decision has recently been appealed and the case will now be heard by the Hamburg Higher Regional Court.
The facts
LAION offers the LAION-5B image-text dataset, which can be used to train large image-text models, such as Stable Diffusion. The plaintiff (a stock photographer) claimed that LAION unlawfully downloaded a photograph created by him for the purposes of creating AI training datasets and demanded a cease and desist order against the allegedly unlawful download. The dataset contains hyperlinks to publicly accessible images or image files on the internet as well as further information about the corresponding images, including an image description that provides information about the content of the image in text form. The dataset comprises 5.85 billion such image-text pairs. LAION extracted the URLs to the images from this dataset and downloaded the images from their respective storage locations, then used software to check whether the image description already in the dataset actually matched the content visible in the image. The website from which the image was downloaded contained terms and conditions that prohibited, among other things, the use of automated programs to access the website or any content on it by way of downloading, indexing, scraping or caching any content on the website.
The decision
The Court rejected the plaintiff’s claims, as the use was covered by the copyright exception for text and data mining "for the purposes of scientific research" (Article 3 of the DSM Copyright Directive as implemented in German law). This exception does not allow rightsholders to opt out. The intended use qualified as “text and data mining” as defined by the law (i.e. the automated analytical technique “aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations”). The Court did not see any evidence that LAION cooperated with a (commercial) third party undertaking exercising decisive influence over it and enjoying preferential access to the results, which would have excluded the exception. The Court expressly decided only on the legality of the download, and not on the question of the (subsequent) training of generative AI, which was not part of the claim brought.
Although further reasoning was not strictly necessary, the Court, in an obiter dictum, also gave an initial assessment of the applicability and interpretation of the “general” TDM exception (Article 4 of the DSM Directive as implemented in German law). The Court accepted that LAION’s use generally qualified as text and data mining. Moreover, it tended to the view that the TDM exception covers not only data analysis but, with reference to Article 53(1)(c) AI Act, also the creation of datasets for the subsequent training of generative AI. However, there likely would have been a valid opt-out declared in the terms and conditions of the website that distributed the plaintiff’s photographs. Although the opt-out had not been made by way of a programmed exclusion protocol (such as robots.txt) but in 'natural' language, the Court tended to the view that such a reservation was sufficiently explicit and specific. The opt-out could also be declared by a non-exclusive licensee of the rightsholder. In addition, such a reservation also likely satisfied the requirements of “machine readability” for content made available online, as there were likely state-of-the-art technologies (as mentioned in Article 53(1)(c) AI Act) available to understand natural language reservations.
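For context on what a 'programmed' exclusion protocol looks like in practice, the sketch below uses Python's standard robots.txt parser to check whether a crawler may fetch a given URL; the domain and crawler name are placeholders with no connection to the parties in the case. The live question the Court addressed is whether a reservation expressed in natural language in website terms, rather than via a protocol like this, can still count as 'machine readable'.

```python
# Minimal sketch: checking a robots.txt-style exclusion protocol programmatically.
# The URL and crawler name below are placeholders, not parties to the case.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# Would a hypothetical TDM/AI-training crawler be allowed to download this image?
allowed = rp.can_fetch("ExampleAIDataBot", "https://www.example.com/images/photo123.jpg")
print("Crawling permitted:", allowed)
```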
What does this mean for you?
The Court’s decision is the first judgment of an EU court addressing the interpretation of the TDM exception. Although the judgment is now subject to review by the appeal court, and although there is no rule of binding precedent in German law, the decision will very likely be taken into account by other courts in Germany and possibly beyond, as it addresses a number of controversial questions at the intersection of copyright and AI. The scientific community will likely welcome the judgment, as it sheds some light on the scope of the TDM exception for scientific purposes under the DSM Directive. It is also noteworthy that the Court saw the TDM exception as generally broad enough to include the training of generative AI. As regards the general TDM exception, which also covers commercial purposes, the discussion of what qualifies as an expressly stated and “machine readable” opt-out will stay high on the agenda.
Paolo Palmigiano looks at the evolving approach of competition authorities to the AI sector in light of recent developments.
The rapid evolution of foundation models and GenAI has recently become the focus of competition authorities.
Policy update
Most competition authorities, especially the UK’s Competition and Markets Authority (CMA) and the European Commission, are trying to get a better understanding of the competition issues that AI raises and how to address them.
In September 2024, the EC released a policy brief addressing competition in GenAI and virtual worlds, and on 16 October it launched a tender for a study on how AI will impact the Digital Markets Act, which regulates Big Tech. In April, the CMA published the outcome of its review of foundation models, and in July the US authorities, the CMA and the EC published a joint statement on competition in GenAI.
Competition concerns
Most authorities agree on the competition concerns: foundation models require a lot of data, a lot of computing power, substantial investment and highly skilled specialists. Big Tech companies are ahead in all those areas and can gain a significant advantage that could distort competition in AI.
Recent merger cases
Most mergers and acquisitions in AI do not meet the merger thresholds in the EU and UK and therefore do not get examined by the authorities. The UK, however, is introducing a new merger control threshold in the new year that could capture some of these transactions (one party has £350m turnover in the UK and a 33% share of supply in any market, and the target has a link to the UK – even if it has no UK revenues). The EU is also considering possible changes to the Merger Regulation. A few years ago, it reinterpreted a provision of the Regulation (Article 22 EUMR) to give itself the power to review mergers below the EU thresholds, but that interpretation has been quashed by the European Court of Justice, so the Commission now has to rethink. Some authorities are suggesting using the value of the transaction rather than turnover to capture these transactions under merger control rules, as Austria and Germany have done. But today, partnerships between Big Tech and small AI start-ups are becoming more prevalent than acquisitions. Partnership agreements tend not to fulfil the criteria for merger review, and there are several recent examples, notably Microsoft/OpenAI, Microsoft/Mistral AI and Amazon/Anthropic.
Acqui-hires
People are a key asset in AI. We are starting to see large companies buying smaller companies for the people they employ or, in an effort to avoid merger filings, hiring just the people and entering into an agreement with the start-up. An example is Microsoft's announcement in March 2024 that it had hired several former Inflection AI employees - almost all of Inflection AI’s team, including two of its co-founders. Microsoft also entered into a series of arrangements with Inflection AI including, among others, a non-exclusive licensing deal to utilise Inflection AI IP in a range of ways. The CMA, with its flexible merger test, took jurisdiction and reviewed the arrangements as a merger transaction but cleared them, as it considered they did not lead to a substantial lessening of competition. The EC tried to take jurisdiction but had to accept that the transaction did not meet the test under EU rules.
What next?
In the next few years, we will see competition authorities trying to deal with the competition issues AI raises, as well as reconsidering their powers so that they are able to review these transactions. The authorities are well aware of the lessons from the growth of the tech sector, where competition intervention was not immediate and was arguably too late when it did happen. They are keen to avoid an equivalent scenario when it comes to AI businesses.
Séverine Bouvy looks at the latest Belgian DPA guidance on data and AI which focuses on AI system development.
In September 2024, the Belgian Data Protection Authority (BDPA) published an information brochure on AI systems and the GDPR outlining the interplay between the GDPR and the AI Act in the context of AI system development (Guidance).
The Guidance first outlines the criteria to be met to qualify as an AI system under the AI Act:
In some cases, AI systems can also learn from data and adapt over time. Examples of AI systems in daily life include spam filters in emails, recommender systems on streaming services, virtual assistants, and AI-powered medical imaging tools.
The Guidance goes on to tackle the application of the GDPR and the AI Act requirements to AI systems, emphasising how these two pieces of legislation complement and reinforce each other:
Lawful, fair, and transparent processing
The six legal bases under the GDPR remain unchanged under the AI Act. In addition, the AI Act prohibits specific AI practices posing unacceptable risk, such as social scoring and real-time facial recognition in public spaces. The GDPR fairness principle is also reinforced by the requirement to mitigate bias and discrimination in the development, deployment, and use of AI systems.
Transparency
The AI Act complements the GDPR by mandating user awareness when interacting with AI systems, and where high-risk AI systems are concerned, by requiring clear explanations of how data influences the AI decision-making process.
Purpose limitation and data minimisation
Under the GDPR, data must be collected for specific purposes and limited to what is necessary. The AI Act reinforces these principles, especially for high-risk AI systems, for which the intended purpose must be clearly defined and documented.
Data accuracy
The GDPR requires data accuracy, which the AI Act strengthens for high-risk AI systems by requiring the use of high-quality and unbiased data to prevent discriminatory outcomes.
Storage limitation
The GDPR limits data storage to what is necessary for the processing (subject to certain exceptions). The AI Act does not add any extra requirements in that respect.
Automated decision-making
The GDPR allows individuals to challenge solely automated decisions which have a legal or similarly significant effect on them, while the AI Act emphasises proactive meaningful human oversight for high-risk AI systems.
Security of processing
Both the GDPR and the AI Act mandate security measures for data processing. The AI Act highlights unique risks in AI systems, such as bias and manipulation, and requires additional security measures such as identifying and planning for potential problems, continuous monitoring and testing and human oversight throughout the development, deployment, and use of high-risk AI systems.
Data subject rights
The GDPR grants individuals rights over their personal data, such as access, rectification, and erasure. The AI Act enhances these rights by requiring clear explanations of how data is used in AI systems.
Accountability
Both the GDPR and the AI Act stress the importance of organisations demonstrating accountability. For AI systems, this includes risk management, clear documentation on the design and implementation of AI systems, human oversight for high-risk AI systems and incident reporting mechanisms.
Finally, the Guidance shows how to apply all these requirements to a specific use case, namely a car insurance premium calculation system.
János Kopasz looks at Hungary's approach to regulating AI Act compliance.
Hungary has taken a significant step in implementing the EU’s AI Act through Government Decree 1301/2024, which foresees the creation of a dedicated regulatory body under the Ministry of National Economy. This body will be responsible for overseeing both notifying and market surveillance duties as required by the AI Act, ensuring the possibility of 'one-stop-shop' administration for AI-related matters. It will also serve as the sole point of contact for fulfilling regulatory tasks related to the Act, simplifying procedures for AI developers and businesses.
In addition to these responsibilities, the future regulatory body will also be tasked with creating and operating a regulatory sandbox, a controlled environment that allows developers to test AI systems before market deployment. This sandbox will ensure that AI technologies can be developed and tested in compliance with safety, legal, and ethical standards, promoting both innovation and adherence to regulatory requirements.
A distinctive feature of Hungary’s approach is that, unlike in several other EU Member States, responsibility for AI regulation will not fall to the Data Protection Authority. Instead, the creation of a dedicated regulatory body emphasises the broader interdisciplinary nature of AI regulation, recognising that AI extends beyond data protection. This approach reflects a more comprehensive strategy for addressing the wider societal, economic, and technological impacts of AI, but it is at odds with the views expressed by the EDPB in its July 2024 Statement, which recommended that Member States designate their Data Protection Authorities as Market Surveillance Authorities under the AI Act.
The decree also envisions the establishment of the Hungarian Artificial Intelligence Council, a body comprising representatives from several key national institutions, including the National Media and Infocommunications Authority (NMHH), Hungarian National Bank (MNB), Hungarian Competition Authority (GVH), National Authority for Data Protection and Freedom of Information (NAIH), Supervisory Authority for Regulated Activities (SZTFH), and the Digital Hungary Agency. The Council will provide strategic guidance and official opinions on AI-related regulatory and policy matters. Its composition reflects the complexity of AI regulation, requiring insights from various sectors to address the multifaceted legal and compliance challenges AI presents. This wide-ranging representation highlights the fact that AI governance encompasses diverse legal fields, including data protection, financial regulation, competition law, cybersecurity, and telecommunications and media law.
The broad representation in the Council underscores the challenge that AI development and compliance present for companies. Businesses developing and deploying AI systems will need to navigate not only the specific requirements of the AI Act but also intersecting regulations from various legal domains. This holistic, multidisciplinary approach is intended to ensure compliant and ethical AI operations, and it highlights the growing importance of responsible digital corporate governance in the ongoing digital transformation. Without such an approach, businesses will find it increasingly difficult to ensure their AI systems meet the numerous regulatory requirements across sectors. Non-compliance with AI regulations could also result in penalties under multiple laws, in addition to the AI Act's own substantial fines.
The decree sets a deadline of 30 November 2024 for the Minister of National Economy to prepare a proposal outlining the necessary legislation, related measures, and an assessment of the impact on the central budget. This proposal will detail the steps required to establish the regulatory body, the sandbox, and the council. More specific information about these developments will become available after this date.
by Katie Chandler and Esha Marwaha
by Susan Hillert, née Lipeyko, Lic. en droit (Toulouse I Capitole)
by Debbie Heywood