
4 December 2023

Predictions 2024 – 2 of 7 Insights

AI predictions for 2024 - by a real person

Martijn Loth looks at what to expect from the AI tsunami in 2024.

Author

Martijn Loth

Counsel


Giddy. That’s how most of us following AI have been feeling over the course of 2023; a myriad of technological advancements and slick new AI tools have been playing a disorienting game of leapfrog with regulatory efforts. All the while, the players have been both cheered on and chastised by stakeholders on opposite sides of the spectrum of interests.

Peering into our crystal ball for 2024, we predict that businesses will keep churning out AI tools, but the velocity of their development and adoption may be impacted by shortages in the hardware needed to train AI, as well as legal concerns regarding the use of AI (and AI output). The EU AI Act is expected to enter into force in 2024 and businesses developing, selling or using AI systems or components will have limited time to get their affairs in order.

AI systems will continue to be developed and improved, and adoption is expected to increase

2023 is likely – and rightfully – going to be called the year of generative AI (ie tools trained on huge amounts of data and capable of generating outputs from a variety of prompts). According to McKinsey, 79% of respondents to its annual survey reported exposure to generative AI and 22% confirmed its regular use. Some highlights of this year include (in no particular order):

  • OpenAI introducing its subscription-based version of ChatGPT; enabling third-party developers to incorporate ChatGPT (ie its language model) and Whisper (ie its speech-to-text model) into third-party applications (eg Home Assistant) through a pay-per-use API (a minimal usage sketch follows this list); introducing GPT-4 – an even more powerful large language model (LLM); quietly announcing to enterprise customers the option to create and implement their own custom GPT-4 models in collaboration with OpenAI’s staff (with pricing starting at a staggering USD 2-3 million!); adding image capabilities to ChatGPT; and acquiring an AI design company called Global Illumination.
  • NVIDIA expanding its product portfolio with BioNeMo, an AI-powered platform that can use models like AlphaFold to predict valid protein structures, accelerating the creation of new drug candidates. NVIDIA also expanded its partnership with Microsoft to make its products more easily available to Azure customers.
  • AWS launching a new free, high-quality online training initiative called ‘AI Ready’, which promises to equip techies and non-techies alike with the essential skills and knowledge to prepare them for careers in generative AI (and addresses the shortage of skilled AI personnel). In addition, the initiative includes a scholarship program to further promote learning about the subject in high schools and universities around the world.
  • Google releasing Bard (its chatbot response to ChatGPT), and Microsoft announcing that it would start incorporating ChatGPT into its product portfolio. Google also introduced PaLM 2, its new large language model meant to compete with OpenAI’s GPT-4.
  • Stability AI releasing Stable Audio, a text-to-audio generative model trained on a huge data set of stock audio samples.
  • HeyGen making waves across social media with its generative AI-powered service enabling video content creators to generate translations of their videos into a multitude of languages in the creator’s own voice and likeness, while automatically adjusting facial expressions and lip movements to match the translated words.
  • Researchers from MIT developing new tools like PhotoGuard, which encodes photos with imperceptible perturbations that prevent AI models from using them for non-consensual deepfakes by tricking the models into using the wrong information for image generation. Researchers from the University of Chicago presented Glaze, a tool that detects an artist’s style and encodes the artist’s works with similar perturbations that hinder or prevent style mimicry.
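
For readers curious what the pay-per-use integration mentioned in the first bullet looks like in practice, below is a minimal, purely illustrative sketch. It assumes the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model names, prompt and file name are placeholders rather than a statement about any vendor’s current product line-up.

```python
# Minimal sketch of a third-party application calling OpenAI's pay-per-use API.
# Assumes: `pip install openai` (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text generation via the chat completions endpoint (the model family behind ChatGPT)
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise the EU AI Act in one sentence."}],
)
print(response.choices[0].message.content)

# Speech-to-text via the Whisper endpoint
with open("meeting.mp3", "rb") as audio_file:  # hypothetical audio file
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)
```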

We expect 2024 to continue in a similar fashion. Businesses like OpenAI, AWS, Microsoft, Google, IBM, Meta, and NVIDIA will continue to gain ground as the major vendors in the AI value chain, and are likely to start tailoring their product portfolios to more specific industries (eg life sciences, financial services, cyber security). Market leaders that also offer third-party developers access to their (upstream) data, services and models (eg through downloads or APIs) are certain to bolster their leadership positions and bottom lines, but will also help accelerate the development and availability of better AI-powered products and services downstream.

In light of increased litigation and general uproar from original content creators, we also expect more tools like PhotoGuard and Glaze to be released in 2024. In fact, Glaze supposedly has an add-on coming out in 2024 called ‘Nightshade’. Nightshade will allow artists not just to add protective perturbations via Glaze, but also to add perturbations that could “poison” data sets (eg tricking a model into believing a dog is a cat), effectively ruining a model’s ability to function if it ingests enough poisoned data.

Development of AI may be constrained by hardware shortages upstream

AI-powered products and services cannot be developed without sufficiently powerful computing resources somewhere in the AI value chain. Chip manufacturer NVIDIA has reigned supreme in this area, with its A100 and H100 GPUs being in extremely high demand over the past year. While the statistics are unverified, some have suggested that training a model like GPT-4 could require as many as 25,000 GPUs, and with the rise of generative AI systems, product development for some AI pioneers has allegedly been constrained by severe GPU shortages. Manufacturers such as Intel and AMD have offered impressive alternatives, but one of the most commonly used software development kits (SDKs), CUDA, is limited to NVIDIA hardware, so Intel and AMD will need to ramp up their efforts to sway AI developers to their respective ecosystems.

New-generation GPUs have also been trailed for 2024, with NVIDIA announcing its H200 series and Intel following suit with Gaudi3, all capable of enabling bigger and better models to be developed in less time. It will be interesting to see whether the release of new iterations of hardware will help alleviate shortages. We believe much will depend on the manufacturers’ willingness to continue producing and supporting older-generation hardware and the market’s ability to make use of those stocks. For example, businesses comfortable with not using the latest and greatest GPUs could continue building their infrastructure using older-generation hardware at discounted pricing, and will likely be able to procure it more easily through second-hand markets once the newer generation arrives in 2024.

The geopolitical backdrop is also likely to factor into the shortages. Citing information security concerns, countries such as the US, Japan and the Netherlands have announced export restrictions that will impact certain countries’ ability to procure the most powerful GPUs and will incentivise AI developers in affected regions to make do with less powerful hardware, seek powerful hardware from grey- and black-market vendors, or manufacture their own. That being said, the export restrictions also target the availability of certain equipment needed to build chips in the first place, so the latter may not be feasible.

Courts could address copyright and licensing questions raised by content creators whose works have been used to train AI models

AI vendors offering generative services have built their foundation models on huge amounts of data scraped from various corners of the internet. As a result, they have been hit with lawsuits by original content creators on a variety of grounds, including general torts, breach of copyright, and breach of licence.

  • Microsoft, OpenAI and GitHub (US) are facing class action suits by developers alleging that their source code was used to train Codex – the model behind GitHub’s text-to-code generation tool ‘Copilot’ – in violation of the open source licences under which the developers published their source code.
  • OpenAI (US) is being challenged by the Authors Guild – which represents authors including George R.R. Martin and John Grisham – over allegations that its GPT models were trained on books written by Guild members without permission and in violation of copyright laws.
  • Meta (US) is also facing a class action suit by authors alleging that their books were used to train Llama – Meta’s set of large language models (LLMs).
  • Stability AI (UK) is being sued by Getty Images, which claims that Stable Diffusion – Stability AI’s text-to-image generation model – infringes Getty’s copyrights because it was trained on large numbers of stock images from Getty’s archives.

In the US, AI vendors seem to have been relying primarily on the doctrine that training a model on (copyrighted) data should be considered ‘fair use’. In the EU, there is no ‘fair use’ doctrine, but AI vendors could try relying on the text-and-data mining (TDM) exception in the Directive on Copyright in the Digital Single Market, or on the exception for temporary acts of reproduction under the Copyright and Information Society Directive. The UK will only be able to rely on the latter, as it has not implemented the EU’s TDM exception and – despite heavy discussion among policymakers – has not yet expanded its own existing TDM exception, which is limited to non-commercial research, to also cover commercial purposes (see more here).

In 2024 we expect to see courts in the US address the challenge of protecting works from being used without consent to train AI models, and perhaps even the bonus challenge of protecting creations made through generative AI. Given that we have seen significantly fewer lawsuits in the EU/UK concerning the use of generative AI, it might be too early to expect the same from EU/UK courts, but we may see more cases being initiated on this side of the Atlantic as well, as we discuss here.

The EU's AI Act will enter into force in the first half of 2024

Last but not least, 2024 is the year in which we expect the text of the EU's Regulation laying down harmonised rules on artificial intelligence (and amending certain legislative acts) (AI Act) to be finalised and the AI Act to enter into force.

The draft AI Act shows that the EU is intent on an ambitious legal framework meant to govern the entire value chain surrounding the development, sale, distribution, and deployment of AI systems in the EU. Similar to the GDPR, the AI Act will have extra-territorial effect. It will adopt a risk-based approach under which certain AI-based practices will be deemed unacceptable (and be prohibited), some systems will be deemed high-risk (and be heavily regulated and required to undergo conformity assessments), others will be deemed limited-risk (and only face transparency-related obligations), and the rest (ie minimal or no risk) will be left to voluntary compliance. The fines for non-compliance will be a major concern for most companies (and remain a nuisance for certain corporate juggernauts), going up to EUR 30 million or 6% of worldwide annual turnover, whichever is greater.
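
To make the ‘whichever is greater’ mechanic concrete, here is a minimal, purely illustrative sketch in Python. The figures mirror the draft text quoted above; the function name and the example turnover are hypothetical.

```python
# Illustrative only: the draft AI Act caps fines at the greater of a fixed amount
# and a percentage of worldwide annual turnover (EUR 30 million / 6% in the draft above).
def maximum_fine(worldwide_turnover_eur: float,
                 fixed_cap_eur: float = 30_000_000,
                 turnover_pct: float = 0.06) -> float:
    """Upper bound of a fine under the 'whichever is greater' rule."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# A company with EUR 1 billion worldwide turnover: 6% (EUR 60 million) exceeds the EUR 30 million floor.
print(maximum_fine(1_000_000_000))  # 60000000.0
```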

Following the initial draft in April 2021, it took the Council over a year and a half to adopt its General Approach to the AI Act, but it finally suggested a number of changes in December 2022, including:

  • clarifying that the most cumbersome obligations of the AI Act should not apply to scientific research and development prior to the AI being placed on the market
  • expanding the list of prohibited AI practices (eg to extend the prohibition of social scoring to private actors) and prohibiting the sale, distribution and deployment of AI systems that exploit vulnerabilities based on social or economic situations. At the same time, the Council suggested clarifying certain exemptions for law enforcement to the ban on remote/real-time biometric identification systems using AI
  • expanding the list of high-risk AI in some areas (eg critical digital infrastructure and life and health insurance) while narrowing the list in others (ie removing from the list: deepfake detection by law enforcement authorities, crime analytics, and verification of the authenticity of travel documents), and suggesting the creation of a legislative mechanism to allow easier amendment of the list of high-risk AI in the future
  • suggesting an innovation-friendly approach that accounts for situations where AI systems are developed and meant to be used for multiple purposes, but where there may be circumstances in which such general purpose AI gets integrated into another system which then becomes high-risk downstream.

The European Parliament (EP) took considerably less time coming up with its response – publishing a vast number of amendments to the AI Act barely six months after the Council’s. Key changes include:

  • narrowing the definition of AI and aligning it more closely with definitions used by other international bodies, to better distinguish between sophisticated technologies and, for example, an Excel sheet
  • defining foundation models and introducing obligations for their providers, including obligations to: implement measures to identify, reduce, and mitigate reasonably foreseeable risks to health, safety, fundamental rights, the environment, and the rule of law; implement data governance measures (eg for suitability and against bias); monitor the performance, predictability and security of the model throughout its lifecycle; use industry standards to reduce energy use, resource use and waste; maintain extensive technical documentation about the model and share it with users downstream; register the foundation model in an EU database; implement safeguards against generating content in breach of the law, without prejudice to fundamental rights such as freedom of expression; and document and make public a summary of the use of copyright-protected training data
  • introducing general principles applicable to all AI systems – seemingly referencing the High-Level Expert Group’s Ethics Guidelines for Trustworthy Artificial Intelligence
  • expanding the list of high-risk AI systems with, for example, recommender systems used by certain very large online platforms (VLOPs) and systems intended to influence the outcome of elections and referenda
  • expanding requirements in the context of high-risk AI, for example, requiring the personnel responsible for human oversight to be sufficiently “AI-literate” and to be made aware of risks such as confirmation bias and automation bias
  • protecting downstream SMEs and start-ups from the bargaining power of major upstream players by deeming unfair certain unilaterally imposed contractual terms concerning the use or integration of high-risk AI systems
  • increasing the maximum fine to EUR 40 million or 7% of worldwide turnover (whichever is greater).

The European Commission, the Council, and the European Parliament are currently in the trilogue phase, during which they will negotiate a finalised text of the AI Act. At the time of writing, what was envisaged as potentially the final trilogue meeting was due to take place imminently. In a November 2023 interview with the IAPP, Kai Zenner and Dragoș Tudorache, two individuals closely involved in the trilogues on behalf of the European Parliament, confirmed the three main outstanding points to be resolved:

  • the exemptions to the list of prohibited practices requested by the Council in favour of law enforcement and national security
  • the scope of the regulatory burden on providers of foundation models
  • how to balance enforcement of the AI Act at Member State level against enforcement at a more centralised EU level.

Despite the first two topics being hugely political, and three of the largest European economies (ie Germany, France and Italy) reportedly threatening to derail the trilogues because of their objections to the EP’s proposal for heavy regulation of foundation models, none of the co-legislators will be keen on delaying negotiations beyond the 2024 elections for the European Parliament, which would be very likely to undo existing compromises and delay a finalised text substantially. We discuss the outstanding issues here.

We are still cautiously optimistic that the finalised text of the AI Act will enter into force by mid-2024 despite the rumours of stumbling blocks. After this, we would expect a sufficiently long transition period of 18-24 months before the AI Act applies in full. However, given the extensive scope of the legislation, regulated parties will still have little time to get their affairs in order: they will need to complete impact assessments, prepare documentation, adopt appropriate strategies and policies for reporting, implement quality management and data governance programs, complete certification processes and vendor due diligence, renegotiate existing arrangements with certain vendors, and enhance employee education and awareness programs.

Will 2024 see a new era of international cooperation on AI safety?

While the EU is leading the way in AI-specific legislation and will be keen to continue leading the international discussion on AI rules and standards by getting its own version out as soon as possible, other countries, including China, the US and the UK, have also tackled the issue this year in varying ways. The Biden administration in the US issued an Executive Order in October 2023 that delineates a comprehensive approach to bolstering the advancement and implementation of trustworthy AI. The US’s moves to regulate AI were announced to coincide with the UK-hosted international AI Safety Summit, which resulted in the Bletchley Declaration – perhaps the start of a new era of international cooperation on AI safety, with further summits to come. The UK and other countries, as well as supra-national bodies, also announced measures to get to grips with AI regulation at and around the summit, as we discuss here.

Next steps in 2024

Businesses operating in the AI value chain that want to get a head start on implementing standards ahead of the AI Act should monitor the work being done by Joint Technical Committee (JTC) 21, comprising members of the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), which has been tasked by the European Commission with drafting and delivering new AI standards. While the new standards are not expected until early 2025, we would expect 2024 to yield useful insights that can inform AI-related strategies and policies as JTC 21 reports on its progress.

If you have reached the end of this article and are feeling slightly lost again, you are likely not alone. Don’t hesitate to reach out. Taylor Wessing is monitoring all the developments in the field of AI closely. We will update you about major news through our website and can set up tailored updates and periodic calls to help you filter out the bits that are relevant to your organisation.
