Here we are again. It feels like only yesterday that we were feeling “giddy and disoriented” about the 2023/2024 AI tsunami. One year later we were trying to make sense of the wave of AI agents coming at us, and in 2025, AI stopped being a 'what if' and became a 'how-to' for organisations. 2026 will be different: the hype bubble has not burst, but it has somewhat deflated – leaving something far more tangible and complex in its place. If previous years marked AI’s emergence, then 2026 will mark the start of its maturity.
EU AI Act and high-risk AI systems – compliance is coming, we're just not sure when
By now, most organisations caught by the EU AI Act will have started creating an inventory and classification of their AI systems; prohibited AI systems should already have been phased out, and workforces should have received at least one round of role-appropriate 'AI literacy training' this year. Rules on GPAI models began to apply last August.
For those caught by the high-risk rules for Annex III systems, the date circled in red on next year’s calendar is 2 August 2026 – at least we think so. As things stand, this is when the majority of obligations for high-risk AI systems finally become applicable. However, this deadline was thrown into doubt when the European Commission published its Digital Omnibus proposal on 19 November 2025.
As any reader of the legislation will attest, the AI Act’s requirements are principle-based, leaving every provider and deployer of an AI system with one fundamental question: “so how do we actually comply?”. The answer is supposed to be the so-called harmonised standards. The European Commission has asked European standardisation organisations (CEN and CENELEC) to draft technical standards that define concrete approaches to meet the AI Act's vague requirements. Once these standards have been assessed and published in the Official Journal of the EU, using them will grant providers and deployers a "presumption of conformity" with certain parts of the AI Act, thus providing them with a clear how-to path towards compliance. Unfortunately, the standards are late; the initial deadlines of April and then August 2025 were missed. Delivery of the first batch of harmonised standards is now projected for Q3/Q4 2026. This means that in 2026 companies will be expected to have complied with the AI Act before the standards that explain how to secure (the presumption of) conformity are actually available.
Rightly recognising that it's hard to comply with rules when you don't know what you need to do to comply, the Digital Omnibus aims to tie the application of the Chapter III high-risk AI obligations to the completion of the harmonised standards and guidance. It proposes that the rules will only apply once the Commission adopts a Decision confirming completion, with an additional six-month transition period for Annex III systems and 12 months for Annex I systems following publication of the Decision. However, if no such Decision is adopted, a backstop kicks in and the rules will apply from 2 December 2027 (Annex III) and 2 August 2028 (Annex I).
These are not the only reforms proposed by the Digital Omnibus in relation to the AI Act – see below for more on personal data considerations and see here for other proposed changes – but the key question is whether the Omnibus can be passed in time for the compliance dates to be extended, which only creates more uncertainty.
Preparations for Chapter III AI Act compliance – whenever it happens – will carry on
For any company deploying (using) or providing high-risk AI systems within scope of the EU AI Act, this is no longer a drill. Whenever they come, the new rules are coming. Requirements include:
- implementing a risk management system (and in some cases a quality management system) to identify, evaluate, and mitigate risks throughout the AI systems’ lifecycle
- ensuring dataset quality through effective governance and monitoring
- enabling users to monitor operation, interpret output, remain aware of ‘automation bias’ and, in specific situations, interrupt or override the system
- ensuring the resilience of AI systems against unauthorised modification attempts and technical vulnerabilities, and
- drawing up and maintaining technical documentation to demonstrate conformity.
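On the last point, some organisations are already experimenting with capturing technical documentation in a machine-readable form from day one, so it can be versioned alongside the model itself. Below is a minimal, purely illustrative Python sketch – the field names are our own shorthand, not the Annex IV schema:

```python
# Illustrative sketch only: field names are our own shorthand,
# not the official Annex IV technical-documentation schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class RiskEntry:
    description: str  # e.g. "biased output for under-represented groups"
    severity: str     # e.g. "high" / "medium" / "low"
    mitigation: str   # the control applied to reduce the risk


@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    training_data_provenance: str
    human_oversight_measures: list[str] = field(default_factory=list)
    risk_register: list[RiskEntry] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise so the record can be versioned alongside the model."""
        return json.dumps(asdict(self), indent=2)


doc = TechnicalDocumentation(
    system_name="cv-screening-assistant",              # hypothetical system
    intended_purpose="Rank job applications for human review",
    training_data_provenance="Internal HR data, 2019-2024, consents documented",
    human_oversight_measures=["reviewer can override every ranking"],
    risk_register=[RiskEntry("Automation bias in recruiters", "high",
                             "Mandatory rationale before accepting a ranking")],
)
print(doc.to_json())
```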
In the absence of harmonised standards, deployers and providers of AI systems would do well to look at other – more generic – standards (e.g. ISO/IEC 42001 on AI management systems) to kick off AI Act compliance, build their initial AI policy house and organise governance. This rings particularly true for companies with much longer R&D cycles, such as pharmaceutical companies and vehicle manufacturers, whose 2026/2027 products are already well into pre-production and which risk delays to market if AI Act compliance has not been duly considered.
Code of practice for GPAI: guidance or guesswork?
Codes of Practice (CoP) are another means for industry standards to be developed in coordination with the AI Office. On 10 July 2025, the EU's General-Purpose AI (GPAI) Code of Practice was published. This CoP serves as an industry-developed, opt-in means of complying with certain parts of the AI Act that have applied to GPAI providers since 2 August 2025. Once endorsed by a provider, the CoP can help that provider demonstrate conformity with the obligations in scope of the CoP.
However, while the CoP aims to clarify the law, it is still far from clear how the obligations described should be fulfilled in practice. Clients tell us that the CoP still raises significant uncertainties. One example is the requirement to reproduce and extract only legally accessible, copyright-protected content when crawling the web. The CoP presupposes the existence of a dynamic list of hyperlinks to be published by the relevant authorities in the European Union. However, this list is still missing, leaving providers unsure how to execute the requirement in a meaningful way.
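Pending publication of that list, many crawler operators fall back on robots.txt-style rights reservations and their own exclusion lists. A minimal sketch of such a pre-crawl check, using only Python's standard library – the EXCLUDED_DOMAINS set and the crawler name are hypothetical placeholders:

```python
# Sketch of a pre-crawl compliance check. EXCLUDED_DOMAINS stands in for
# the dynamic list the CoP presupposes but which has not yet been
# published; robots.txt handling uses the standard library.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

EXCLUDED_DOMAINS = {"example-piracy-site.invalid"}  # hypothetical placeholder
USER_AGENT = "my-gpai-crawler"                      # assumed crawler name


def may_crawl(url: str) -> bool:
    host = urlparse(url).netloc
    if host in EXCLUDED_DOMAINS:
        return False  # domain is on the (placeholder) exclusion list
    robots = RobotFileParser()
    robots.set_url(f"https://{host}/robots.txt")
    try:
        robots.read()
    except OSError:
        return False  # fail closed if robots.txt cannot be fetched
    return robots.can_fetch(USER_AGENT, url)


print(may_crawl("https://example.com/articles/some-page"))
```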
Given that other CoPs, such as the one for marking and labelling AI-generated content, are also expected in 2026, we would expect – or at least hope – that subsequent CoPs take into account the lessons learned from the GPAI CoP.
2026: the year of the watermarks?
Under the AI Act's transparency rules, individuals need to be appropriately informed when they are interacting with an AI system. Providers of AI systems used to generate synthetic audio, images, video or text content will need to ensure that the output is marked in a machine-readable format and is detectable as artificially generated. And deepfake-labelling obligations are sure to become a headache as sales and marketing teams around the world argue with in-house teams over how to label AI-generated commercials and ads in a way that does not “hinder the viewing or enjoyment of the work”.
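What counts as 'machine-readable' marking is still being worked out. By way of a deliberately simplistic illustration – not a robust watermark, and trivially stripped – an image generator could embed a provenance tag in the file's metadata, here using the Pillow library:

```python
# Simplistic illustration of machine-readable marking: a provenance tag
# in PNG metadata. Real deployments lean towards robust, tamper-resistant
# schemes (e.g. C2PA content credentials); a metadata tag like this is
# trivially stripped and would not by itself satisfy a regulator.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512))        # stand-in for generated output
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")  # assumed model name

image.save("output.png", pnginfo=metadata)

# A verifier can read the tag back:
print(Image.open("output.png").text.get("ai_generated"))
```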
On 5 November 2025, the European Commission met with tech industry groups to develop a Code of Practice on the marking and labelling of AI-generated content. This CoP will, among other things, assist deployers using deepfakes or AI-generated content in clearly disclosing AI involvement, particularly when informing the public on matters of public interest.
Even so, industry has already started experimenting with different labelling techniques. For example, OpenAI’s new video generator Sora 2 adds a watermark to any output. However, watermarking of video and photo generations faces an obvious obstacle: websites have already emerged that make it possible to remove these marks. Labelling synthetic text is far less straightforward. Will it require an on-screen disclosure? Or perhaps a subtle linguistic marker in the text itself, such as the dash-heavy sentences in texts generated by ChatGPT?
While a leaked draft of the Digital Omnibus proposed a one-year delay to the AI Act's watermarking requirements, this did not materialise in the final version. We expect significant experimentation and debate throughout 2026 as developers and deployers search for a solution that is both technically achievable and meaningful to end users.
More leeway on use of personal data?
The Digital Omnibus package comes in two parts – the first deals with changes to the AI Act and the second streamlines the EU data acquis, including the GDPR. The Digital Omnibus is central to the European Commission's push for 'simplification' to boost EU competitiveness, citing the "accumulation of rules" as having an adverse effect on innovation, and there are certainly some very helpful proposals relating to the use of personal data to train and operate AI systems:
- Legitimate interest for AI (new Article 88c): a new Article 88c explicitly states that processing personal data for the "development and operation of an AI system... may be pursued within the meaning of Article 6(1)(f) [legitimate interest]" unless Union or other national laws explicitly require consent. This is subject to the usual requirements to carry out a balancing exercise against the rights of individuals, and to apply safeguards, including data minimisation, transparency, and an "unconditional right to object".
- Bias detection: providers and deployers of all AI systems and models (not just high-risk) will be allowed to process special category personal data for the purpose of ensuring bias detection and correction, subject to appropriate safeguards.
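To illustrate why this carve-out matters: even the simplest bias check – comparing outcome rates across a protected attribute – requires processing exactly the kind of data it covers. A toy sketch follows (synthetic data; the 0.1 threshold is illustrative, not a legal or statistical standard):

```python
# Minimal bias-detection sketch: compare positive-outcome rates across a
# protected attribute (demographic parity difference). The data and the
# 0.1 threshold are illustrative only.
from collections import defaultdict

# (protected_group, model_decision) pairs - synthetic example data
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(f"selection rates: {rates}, parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold only
    print("potential bias detected - investigate and correct")
```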
Privacy advocates may be inclined to call this a 'blank cheque' for AI companies that could wreck the GDPR's core principles, and providers supplying AI systems trained on personal data will need to watch these developments closely. The legitimate interest addition in particular is likely to prove highly controversial, and it remains to be seen whether it survives the legislative process.
It will also be interesting to see whether other data regimes that prioritise EU adequacy will change as a result. In particular, the UK's changes to the UK GDPR allow the Secretary of State to introduce new recognised legitimate interests, so we may well see something similar to the Digital Omnibus proposal in the UK.
2026: the year of the licensing deal?
On 11 November 2025, the Munich court in Germany ruled that OpenAI infringed copyright by using song lyrics in its AI models (read more). This case goes to the heart of how EU copyright law applies in the age of AI, and it may be seen as an inflection point where the debate shifts from “is it infringement?” to “how do creators and rights holders get compensated?”.
Conversations we've had with industry stakeholders suggest that 1-to-1 licensing deals are not feasible because of the transaction costs (and the perceived barrier to innovation), but 2026 may be the year in which stakeholders sit down together to strike some kind of collective deal, creating a collective management organisation for AI training data, much like those we have for music royalties. For more copyright predictions, see here [link to Gregor's article], and you may also be interested in our AI copyright case tracker.
AI and liability – insurance companies will step up
In our 2025 AI predictions, we highlighted the substantial criticism surrounding the proposed AI Liability Directive. To the satisfaction of many stakeholders in the tech industry, the European Commission officially withdrew the proposal in October 2025. The key question now is whether an alternative approach or another type of regulatory initiative will be introduced to address the liability issues posed by AI. This makes it important for stakeholders to voice their concerns and suggestions, potentially shaping the future of AI governance in Europe.
The withdrawal of the AI Liability Directive raises questions about how AI-related liability will be handled, particularly in cases falling outside the scope of the harmonised Product Liability Directive. For now, these disputes will be handled under existing national liability regimes. It will be interesting to see how courts operationalise these principles in disputes involving self-learning and evolving systems.
2026 marks the beginning of a decade in which multi-party, data-driven and technically complex disputes will become more common. Documentation, responsible governance and rigorous testing will be essential. As we discuss here, explainability will not merely be a regulatory ambition, but will also function as a legal defence.
We expect 2026 to be the year in which the AI liability insurance market steps up accordingly, offering products that specifically cover the novel risks (regulatory fines, new AI-specific torts, agent-caused damage) that existing cyber insurance policies typically exclude. You can follow our series on AI disputes here.
AI agents will actually join the team
Last year, we described the emergence of AI agents: systems that can independently perform multi-step tasks. In 2026, their roles will shift from experimental support to structural integration within work processes. Workflows will slowly be redesigned, with AI playing a central role from the start. As McKinsey describes in its Global Survey on the State of AI 2025, companies that actively redesign their workflows to leverage AI effectively will see the most measurable results. Early adopters are beginning to show signs of this evolution, with smaller (human) teams coordinating multiple specialised (AI) agents. 2026 may show the early signs of what some are calling the “agentic organisation”.
In the light of this development, it is becoming a priority for both deployers and providers to promote standardisation (e.g. through CoPs and harmonised standards) and interoperability.
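For the avoidance of doubt about what this coordination pattern looks like, here is a purely illustrative sketch – the 'agents' are stub functions, not any particular framework's API:

```python
# Illustrative coordination pattern only: each "agent" is a stub function,
# not a real framework API. The point is the shape - a human-owned plan,
# specialised agents per step, and output passed along the chain.
from typing import Callable

def research_agent(task: str) -> str:
    return f"notes on: {task}"            # stub for a retrieval/research agent

def drafting_agent(task: str) -> str:
    return f"draft based on ({task})"     # stub for a drafting agent

def review_agent(task: str) -> str:
    return f"reviewed: {task}"            # stub for a QA/review agent

AGENTS: dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "draft": drafting_agent,
    "review": review_agent,
}

# A human team member defines the workflow; agents execute the steps.
plan = ["research", "draft", "review"]
result = "summarise the new AI Act deadlines"
for step in plan:
    result = AGENTS[step](result)
print(result)
```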
As the technology matures, so will the threat actors
The AI Act requires an adequate level of cyber security for AI systems. In the age of AI, security is no longer just about adding a new firewall or strengthening passwords. AI security will be about protecting the data, the model and the integrity of the produced outputs, and new types of attack are on the rise, such as data poisoning (where threat actors manipulate or corrupt the training data used to develop models). We expect 2026 to be the year that the industry – good and bad actors alike – matures alongside this technology.
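To make the data poisoning threat concrete, here is a toy demonstration using scikit-learn: an attacker who can flip a share of training labels measurably degrades the resulting model. Synthetic data and a deliberately crude attack, purely to show the mechanism:

```python
# Toy demonstration of a label-flipping data-poisoning attack on a
# simple classifier. Synthetic data; the attack is deliberately crude.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```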
AI is itself rapidly becoming an essential component of cyber security. AI can make cyber security systems faster, smarter and more proactive by enabling automated threat detection, efficient vulnerability analysis and fast incident response. These new capabilities allow organisations to detect and respond to attacks more quickly and accurately. 2026 will see this enhanced tooling finding its way into more traditional cyber security defences.
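A typical building block of such tooling is unsupervised anomaly detection over traffic or log features. A minimal sketch using scikit-learn's IsolationForest, with synthetic vectors standing in for real telemetry:

```python
# Minimal anomaly-detection sketch: IsolationForest flags outliers in
# feature vectors that stand in for real network/log telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: three ordinary, one wildly out of distribution.
new_events = np.array([[0.1, -0.2, 0.0, 0.3],
                       [0.5, 0.4, -0.1, 0.2],
                       [-0.3, 0.1, 0.2, -0.4],
                       [8.0, 9.5, -7.2, 10.1]])
print(detector.predict(new_events))  # -1 marks a suspected anomaly
```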
It does compute
2026 will not be remembered as the year AI emerged, but as the year AI matured. Even if the Digital Omnibus goes through on time and the rules on high-risk AI are delayed, it will be the year in which we start to see how organisations translate the AI Act’s requirements and the technology’s challenges into practical solutions.
Interested in staying on top of future developments? You can view our insights on our AI page and sign up to AIQ, our AI news update, here.