Giddy. That’s how most of us following AI have been feeling over the course of 2023; a myriad of technological advancements and slick new AI tools have been playing a disorienting game of leapfrog with regulatory efforts. All the while, the players have been both cheered on and chastised by stakeholders across the spectrum of interests.
Peering into our crystal ball for 2024, we predict that businesses will keep churning out AI tools, but the velocity of their development and adoption may be impacted by shortages in the hardware needed to train AI, as well as legal concerns regarding the use of AI (and AI output). The EU AI Act is expected to enter into force in 2024 and businesses developing, selling or using AI systems or components will have limited time to get their affairs in order.
2023 is likely – and rightfully – going to be called the year of generative AI (ie tools trained using huge amounts of data and capable of generating outputs from a variety of prompts). According to McKinsey, 79% of respondents to their annual survey reported exposure to generative AI and 22% confirmed its regular use. Some highlights of this year include (in no particular order):
We expect 2024 to continue in a similar fashion. Businesses like OpenAI, AWS, Microsoft, Google, IBM, Meta and NVIDIA will continue to gain ground as the major vendors in the AI value chain, and are likely to start tailoring their product portfolios to more specific industries (eg life sciences, financial services, cyber security). Market leaders that also offer third party developers access to their (upstream) data, services and models (eg through downloads or APIs) are certain to bolster their leadership positions and bottom lines, but will also help accelerate the development and availability of better AI-powered products and services downstream.
In light of increased litigation and general uproar from original content creators, we also expect more tools like PhotoGuard and Glaze to be released in 2024. In fact, Glaze supposedly has an add-on coming out in 2024 called ‘Nightshade’. Nightshade will allow artists not just to add prevention tags via Glaze, but also to add tags that could “poison” data sets (eg by tricking a model into believing a dog is a cat), effectively ruining a model’s ability to function if it ingests enough poisoned data.
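Conceptually, this kind of data poisoning works by injecting mislabelled (or subtly perturbed) samples into a training set so that a model learns the wrong associations. As a loose illustration only – Nightshade’s actual technique perturbs image pixels rather than text labels – a toy label-flipping sketch in Python might look like this (all names here are hypothetical):

```python
import random

def poison_labels(dataset, flip_map, poison_rate, seed=0):
    """Return a copy of (features, label) pairs in which a fraction of
    labels are swapped according to flip_map, eg {"dog": "cat"}.
    A model trained on enough poisoned pairs learns the wrong mapping."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label in flip_map and rng.random() < poison_rate:
            poisoned.append((features, flip_map[label]))
        else:
            poisoned.append((features, label))
    return poisoned

# Toy data set: 100 images, all genuinely labelled "dog".
clean = [(f"img_{i}", "dog") for i in range(100)]
# Poison roughly 30% of the labels so the model sees "cat" instead.
tainted = poison_labels(clean, {"dog": "cat"}, poison_rate=0.3)
flipped = sum(1 for _, lbl in tainted if lbl == "cat")
print(f"{flipped} of {len(tainted)} labels poisoned")
```

The point of tools like Nightshade is that such corruption is hard for a scraper to detect at ingestion time, so the damage only surfaces after training.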
AI-powered products and services cannot be developed without sufficiently powerful computing resources somewhere in the AI value chain. Chip manufacturer NVIDIA has reigned supreme in this area, with its A100 and H100 GPUs in extremely high demand over the past year. While the statistics are unverified, some have suggested that training a model like GPT-4 could require as many as 25,000 GPUs, and with the rise of generative AI systems, product development for some AI pioneers has allegedly been constrained by severe GPU shortages. Manufacturers such as Intel and AMD have offered impressive alternatives, but one of the most commonly used software development kits (SDKs), ‘CUDA’, is limited to NVIDIA hardware, so Intel and AMD will need to ramp up efforts to sway AI developers to their respective ecosystems.
New generation GPUs have also been trailed for 2024, with NVIDIA announcing its H200 series and Intel set to follow suit with Gaudi3, all capable of enabling bigger and better models to be developed in less time. It will be interesting to see whether the release of new iterations of hardware will help alleviate shortages. We believe much will depend on the manufacturers’ willingness to continue producing and supporting older generation hardware, and on the market’s ability to make use of those stocks. For example, businesses comfortable with not using the latest and greatest GPUs could continue building their infrastructure using older generation hardware at discounted pricing, and will likely be able to procure it more easily through second-hand markets once the newer generation arrives in 2024.
The geopolitical backdrop is also likely to factor into the shortages. Citing information security concerns, countries, such as the US, Japan and the Netherlands, have announced export restrictions that will impact certain countries’ ability to procure the most powerful GPUs and incentivise AI developers in affected regions to make do with less powerful hardware, seek powerful hardware from grey and black market vendors, or manufacture their own. That being said, export restrictions also target the availability of certain equipment needed to build chips in the first place, so the latter may not be feasible.
AI vendors offering services have built their foundation models on huge amounts of data scraped from across the internet. As a result, they have been hit with lawsuits by original content creators on a variety of grounds, including general torts, breach of copyright, and breach of licence.
In the US, AI vendors seem to have been primarily relying on the doctrine that training a model on (copyrighted) data should be considered ‘fair use’. In the EU, there is no ‘fair use’ doctrine, but AI vendors could try relying on the text-and-data mining (TDM) exception in the Directive on Copyright in the Digital Single Market, or on the exception for temporary acts of reproduction under the Copyright and Information Society Directive. The UK will only be able to rely on the latter as it has not implemented the TDM exception and – despite heavy discussion among policymakers – has not yet expanded its existing TDM exception to also cover commercial purposes (see more here).
In 2024 we expect to see courts in the US address the challenge of protecting works from being used without consent when training AI models, and maybe even address the bonus challenge of protecting creations made through generative AI. Given that we have seen significantly fewer lawsuits in the EU/UK concerning the use of generative AI, it might be too early to expect the same from EU/UK courts, but we may see more cases being initiated here too, as we discuss here.
Last, but not least, 2024 is the year that we expect the text of the EU's Regulation laying down harmonised rules on artificial intelligence (and amending certain legislative acts) (AI Act) to be finalised and the AI Act to enter into force.
The draft AI Act shows that the EU is intent on an ambitious legal framework meant to govern the entire value chain surrounding the development, sale, distribution and deployment of AI systems in the EU. Similar to the GDPR, the AI Act will have extra-territorial effect. It will adopt a risk-based approach under which certain AI-based practices will be deemed unacceptable (and be prohibited), some systems will be deemed high-risk (and be heavily regulated and required to undergo conformity assessments), others will be deemed limited-risk (and only face transparency-related regulation), and the rest (ie minimal or no-risk) will be left to voluntary compliance. The fines for non-compliance will be a major concern for most companies (and remain a nuisance for certain corporate juggernauts), with fines of up to EUR 30 million or 6% of worldwide turnover, whichever is greater.
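For a sense of scale, the draft's headline penalty tier – "EUR 30 million or 6% of worldwide turnover, whichever is greater" – reduces to a simple maximum. The sketch below is a hypothetical illustration of that arithmetic, not legal advice, and the figures may change in the final text:

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Upper bound of a fine under the draft AI Act's headline tier:
    the greater of EUR 30 million or 6% of worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * worldwide_turnover_eur)

# EUR 200m turnover: 6% = EUR 12m, so the EUR 30m floor applies.
print(max_fine_eur(200_000_000))    # 30000000.0
# EUR 1bn turnover: 6% = EUR 60m, exceeding the floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

In other words, the fixed floor bites for any business with worldwide turnover below EUR 500 million; above that, the percentage-based cap takes over.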
Following the initial draft in April 2021, it took almost a year and a half for the Council to adopt its General Approach to the AI Act, but it finally suggested a number of changes in December 2022, including:
The European Parliament (EP) took considerably less time coming up with its response – publishing a vast number of amendments to the AI Act barely six months after the Council’s General Approach. Key changes include:
The European Commission, the Council, and the European Parliament are currently in the trilogues phase, during which they will negotiate a finalised text of the AI Act. At the time of writing, what was envisaged as potentially the final trilogue meeting was due to take place imminently. In a recent interview with the IAPP in November 2023, Kai Zenner and Dragoș Tudorache, two individuals closely involved in the trilogues on behalf of the European Parliament, confirmed the three main outstanding points to be resolved:
Despite the first two topics being hugely political, and three of the largest European economies (ie Germany, France and Italy) reportedly threatening to derail trilogues because of their objections to the EP’s proposal for heavy regulation of foundation models, none of the co-legislators will be keen on delaying negotiations beyond the elections for the European Parliament in 2024. A delay of that kind would very likely undo existing compromises and push back a finalised text substantially. We discuss the outstanding issues here.
We are still cautiously optimistic that the finalised text of the AI Act will enter into force by mid-2024 despite the rumours of stumbling blocks. After this, we would expect a transition period of 18-24 months before the AI Act applies in full. However, given the extensive scope of the legislation, regulated parties will still have little time to get their affairs in order: they will need to complete impact assessments, prepare documentation, adopt appropriate strategies and policies for reporting, implement quality management and data governance programmes, complete certification processes, carry out vendor due diligence, renegotiate existing arrangements with certain vendors, and enhance employee education and awareness programmes, among other things.
While the EU is leading the way in AI-specific legislation and will be keen to continue leading the international discussion on AI rules and standards by getting its own version out as soon as possible, other countries, including China, the US and the UK, have also tackled the issue this year in varying ways. The Biden administration in the US issued an Executive Order in October 2023 setting out a comprehensive approach to advancing the development and deployment of trustworthy AI. The US's moves to regulate AI were announced to coincide with the UK-hosted international AI Safety Summit, which resulted in the Bletchley Park declaration – perhaps the start of a new era of international cooperation on AI safety, with further summits to come. Other countries, including the UK, as well as supra-national bodies, also announced measures to get to grips with AI regulation at and around the summit, as we discuss here.
Businesses operating in the AI value chain that want a head start on implementing standards ahead of the AI Act are recommended to monitor the work being done by Joint Technical Committee (JTC) 21, comprising members of the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), which has been tasked by the European Commission with drafting and delivering new AI standards. While the new standards are not expected until early 2025, we would expect JTC 21's progress reports during 2024 to yield useful insights that can inform AI-related strategies and policies.
If you have reached the end of this article and are feeling slightly lost again, you are likely not alone. Don’t hesitate to reach out. Taylor Wessing is monitoring all the developments in the field of AI closely. We will update you about major news through our website and can set up tailored updates and periodic calls to help you filter out the bits that are relevant to your organisation.