One thing is certain looking back at the year 2024 and ahead to 2025: AI is here and it is here to stay. But 2024's developments also raise some questions about the evolution of AI: what form will it take? How will its use be restricted? And, crucially, how will AI impact us, our environment and future generations? Gather ‘round our digital fire while we discuss the state of play and provide you with our expert insights into 2025 and what we expect to be the main developments relating to AI.
The EU AI Act – then, now and the future
The ground-breaking EU AI Act entered into force on 1 August 2024, but no obligations or requirements apply yet. Most in-scope companies have been moving full steam ahead to prepare, and these are the most important milestones likely to impact companies in 2025:
- On 2 February 2025, Chapters I and II of the AI Act will become applicable, prohibiting AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, to materially distort the behaviour of people or groups of people and cause them to take decisions that they would not otherwise have taken.
- In May 2025, codes of practice are scheduled to be ready. These are not binding, but are aimed at improving AI Act compliance and providing further guidance in interpreting its requirements.
- 2 August 2025: further rules under the AI Act will become applicable, including those related to notified bodies, general-purpose AI models, governance, confidentiality and penalties.
- Other key provisions will become applicable over 2026-2027, as we discuss here.
These timelines are short, and preparing the organisation for the likelihood that high-risk AI systems are being used or provided is no easy feat. That is why we have seen clients increasingly focused on making sure they can hit the ground running by 2 February 2025. We have a great article on that, but businesses should start by:
- making an inventory of all AI systems used and working out (and documenting) whether they qualify as an AI system under the AI Act and, if so, the category they fall into (a minimal sketch of such an inventory record follows this list)
- preparing the company for the AI literacy requirements, including by developing training programmes to educate those who work with AI systems, in line with the requirements of the EU AI Act
- determining whether they qualify as a provider, deployer, importer and/or distributor under the EU AI Act, noting that even non-EU based entities can be caught.
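To illustrate what such an inventory could look like in practice, here is a minimal, hypothetical sketch of an AI system register entry in Python. The field names, categories and the example entry are our own simplification for illustration only; they are not terminology or a template prescribed by the AI Act.

```python
# Illustrative sketch of an AI system inventory record, assuming a company keeps
# a simple internal register of each system, its AI Act role(s) and risk category.
# Field names and enum values are a simplification, not AI Act terminology.

from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    MINIMAL_RISK = "minimal-risk"


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    qualifies_as_ai_system: bool              # documented assessment against the AI Act definition
    risk_category: RiskCategory
    company_roles: List[Role] = field(default_factory=list)
    assessment_notes: str = ""                # where the reasoning behind the classification lives


# Hypothetical example entry
cv_screening = AISystemRecord(
    name="CV screening tool",
    vendor="ExampleVendor",
    purpose="Ranking incoming job applications",
    qualifies_as_ai_system=True,
    risk_category=RiskCategory.HIGH_RISK,     # recruitment use cases are typically high-risk
    company_roles=[Role.DEPLOYER],
    assessment_notes="Used in recruitment; classification reasoning documented separately.",
)
```

However the register is kept, the point is the same: each system gets a documented answer to whether it is an AI system, which risk category it falls into and which role(s) the company plays in relation to it.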
We expect the EU AI Act will be put to the test for the very first time in 2025 as the first obligations and the enforcement and penalties provisions begin to apply. The star of the show will, of course, be the prohibition on certain types of AI systems, and we have already seen regulatory authorities (e.g. in the Netherlands) reach out to the industry for input on this. At a later stage, the rules on high-risk AI systems will kick in, which are likely to impact a much wider group of organisations.
Read more about the AI Act obligations here and about how they impact particular sectors here.
The AI Act may be the biggest Act (ahem) in town, but another new (and relatively unknown) piece of EU legislation that may pass in 2025 is the AI Liability Directive. This aims to address non-contractual civil claims for damages (i.e. tort). It will mainly be relevant for damage claims brought by those who suffer damage from the use of AI where there is no contractual relationship between the victim and the party that makes use of the AI. A key goal of the Directive is to reverse the burden of proof in cases where AI has allegedly caused damage: it will be up to the party responsible for the AI system to demonstrate that the AI system did not cause the alleged damage. This is especially interesting in the context of AI systems that have a certain amount of 'autonomy', like AI agents, although the draft legislation has come in for considerable criticism (as discussed here) and may yet change or even fail. Before we go on to AI agents, it's worth mentioning that the UK is also likely to get AI legislation next year, as we discuss here.
AI agents
While 2023 and 2024 were all about generative AI, 2025 is likely to revolve around the rise of AI agents. AI agents are AI systems – usually highly specialised generative AI applications – that can execute specific tasks by themselves at the instruction of human users or of other (AI) systems. This enables them to perform tasks that involve multiple steps or elaborate problem-solving procedures that cannot easily (or reliably) be handled by traditional AI models.
Traditional large language models (LLMs) generate responses based on their training data, so they are restricted by the training data provided and by limitations in their reasoning. AI agents, however, call on other tools and systems to obtain new information. This does not mean they are completely independent entities; they still need to be designed and trained by humans, and they are influenced by the way they are deployed and by the input humans provide. AI agents are, however, particularly good at breaking down the query provided and then applying other tools (including other AI systems) to produce an optimal response. An example is using an AI agent to plan a business trip itinerary including flights, accommodation and agenda planning, while also taking into account traffic flow, carry-on luggage allowances and so on to arrive at an optimal and efficient solution. This can go as far as allowing the agent to actually book all the required facilities and flights.
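For readers who like to see the pattern in code, below is a minimal sketch of what such an agent loop can look like. The planner logic and the two tools (search_flights and find_hotels) are hypothetical stubs of our own; a real agent would delegate the planning step to an LLM and wire the tools to live systems.

```python
# A minimal, illustrative agent loop: the agent repeatedly decides which tool to
# call next, executes it, and feeds the result back into the plan until the task
# looks complete. All tool names and planning logic are hypothetical placeholders.

from typing import Callable, Dict, List, Optional, Tuple


def search_flights(query: str) -> str:
    return f"Flight options found for: {query}"   # stand-in for a real flight API


def find_hotels(query: str) -> str:
    return f"Hotel options found for: {query}"    # stand-in for a real booking API


TOOLS: Dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "find_hotels": find_hotels,
}


def plan_step(task: str, history: List[str]) -> Optional[Tuple[str, str]]:
    """Pick the next tool call, or return None when the task looks complete."""
    if not any("Flight" in h for h in history):
        return ("search_flights", task)
    if not any("Hotel" in h for h in history):
        return ("find_hotels", task)
    return None


def run_agent(task: str, max_steps: int = 5) -> List[str]:
    """Break the task into tool calls, feeding each result back into the plan."""
    history: List[str] = []
    for _ in range(max_steps):       # cap the number of steps so the loop always ends
        step = plan_step(task, history)
        if step is None:
            break
        tool_name, tool_input = step
        history.append(TOOLS[tool_name](tool_input))
    return history


if __name__ == "__main__":
    for result in run_agent("business trip to Berlin, 3-5 March"):
        print(result)
```

The key point is the loop itself: the agent dissects the task, calls a tool, observes the result and plans again, which is what allows it to handle multi-step problems that a single model response cannot.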
AI and copyright infringement
2025 will also be an interesting year for the intersection between copyright and AI. The US already has quite a roster of pending litigation involving AI and copyright/licensing disputes; an extensive overview of such cases prepared by George Washington University can be found here. The EU has been lagging a little behind in terms of AI copyright disputes reaching the courts, but in 2024 that changed, and we expect decisions on appeal in 2025. Among the notable cases are:
- A Czech case in which the dispute revolved around an image the plaintiff had created using AI. The defendant, a law firm, had used the image on its website. The court ruled that the AI-created image was not protected by copyright, in part because it was not clear that the plaintiff had given the instructions to the AI. As European copyright law is largely harmonised, it will be interesting to see whether other European courts follow suit, and it is very likely that we will find out in 2025.
- Kneschke vs. LAION, a case before the Hamburg regional court that revolves around LAION’s use of a photograph taken by Kneschke, which was included in a dataset LAION created for (scientific) training purposes. Kneschke had uploaded the photo to a stock website that stated that the photos on that site may not be used for “automated programs” (the AI opt-out). The court ruled that the creation of such a dataset for scientific purposes already constitutes scientific research and that LAION can therefore rely on the scientific text and data mining exception. Kneschke has lodged an appeal, so an update on this matter is expected in 2025. Read more here.
AI and its impact on humans and our environment
The increasing use of AI tools will continue to have an impact on us and our environment. Just a few headline examples:
- AI tools are frequently used in an HR environment, for example for screening résumés, writing covering letters and analysing job applications. While AI provides some advantages in this context, it also has the potential to introduce unintended bias: these systems are only as strong as their training data. That is why AI systems in the workplace will be heavily regulated under the AI Act and other legislation such as the EU's Platform Work Directive and the GDPR.
- 2024 was a year of elections, and the rise of deepfakes sparked considerable media interest and societal concern. During the US Presidential election, for example, the world was shown videos of presidential candidate Kamala Harris doing and saying unusual things; these were deepfakes. Deepfakes are achieving an unprecedented level of realism and are nearly indistinguishable from genuine material. In 2025, we expect to see more and more companies focussing on detecting deepfakes and combating fraud committed through their use. A Dutch frontrunner in this area appears to be DuckDuckGoose, which develops AI detection software specifically tailored to combat the abuse of AI in the areas of KYC, video conferencing and journalism.
- The impact of AI on the (physical) environment should not be underestimated. As AI tools grow more powerful, so does their thirst for electrical power. This is relevant not only from an environmental point of view but also for data centre operators, who must ensure that ample power is available to keep AI systems going full steam ahead while striking the right balance from an ESG perspective, a big challenge in these times of climate change. That being said, there is still considerable debate about the scale of AI's impact on the environment and even whether it is necessarily a bad thing. For example, NVIDIA CEO Jensen Huang has suggested that we may all be better off nonetheless ("If the world uses more energy to power the AI factories of the world, we are a better world when that happens").
AI financials: volatile stocks and major investments
After years of seemingly never-ending optimism about AI, in 2024 some major AI-related companies took hits on the stock market, although they were also very quick to recover. Significant investments are being made in AI companies, for example by tech giants Amazon and Microsoft; no doubt this will fuel further growth of AI in 2025. By way of example:
- NVIDIA, the designer of graphics processing units (GPUs) for computers that also produces the majority of chips used for AI purposes, saw its stock price drop by around 25% in mid-2024. In context, however, NVIDIA stock had risen approximately 200% since early 2024, and the price went on to recover, surpassing previous highs.
- Alphabet (Google’s parent company) experienced a similar pattern, peaking in July 2024, taking a steep drop in October 2024 and now starting to recover. Alphabet is heavily involved in AI development, including through its autonomous Waymo taxi service and its AI chatbot Gemini.
- Amazon invested a total of USD8 billion in AI start-up Anthropic.
- Microsoft invested over USD13 billion in OpenAI.
- OpenAI’s Sam Altman is reportedly pursuing a USD7 trillion – and no, that is not a typo – investment to advance the semiconductor industry that powers AI systems.
AI in the US
The outcome of the US Presidential election is expected to influence the use and development of AI in the USA and beyond. In our 2024 forecast, we briefly outlined the Biden administration’s efforts (through an AI executive order in 2023 and forming a new AI Safety Institute in 2024) to bolster the advancement and implementation of trustworthy AI. President-elect Trump has already announced he will reverse this executive order, alleging that it hinders innovation. The impact remains to be seen and probably won't become fully apparent next year. See here for more predictions on the impact of the US election result.
Staying on top of developments
This all goes to show that things in AI continue to move at a breakneck pace, and keeping up with industry and legal developments will remain a challenge. Fortunately, we will continue offering key updates and insights in 2025. You can view all our insights on our dedicated AI page, and you can also sign up to AIQ, our quarterly AI news update. See you on the flipside!