The gaming industry has always been at the forefront of artificial intelligence (AI). AI has been used by games studios for decades, including for automation of non-player characters (NPCs), enhancement of graphics and visual effects, and personalisation of gameplay. The use of AI in gaming goes back to classic games like Pac-Man, whose autonomous ghosts each follow distinct patterns and strategies made possible through AI.
As the games industry looks towards the future, the spotlight is now on revolutionising both the development process and gameplay of video games.
This article explores the impact of generative AI in gaming from a copyright standpoint, examining the potential legal challenges, with comparative views from a French, UK, and German perspective.
Generative AI is expected to take game development to the next level. It will enable games developers to automate content creation processes, reducing development time and offering a broader range of creative possibilities. Generative AI does not yet allow the creation of a complete video game without human intervention, but it can generate some of its components, such as narratives and visuals.
No-code software tools, powered by generative AI, have also already gained popularity among games studios. These tools simplify the development process by eliminating the need for complex coding, enabling developers to focus more on creativity and innovation. They also allow other stakeholders involved in development, such as games designers, to test their ideas more independently, reducing a dependency that tends to slow down the development process.
Generative AI is expected to drive greater personalisation of gameplay by allowing players to contribute to the creation process and to generate their own game components and become creators themselves, blurring the lines between developers and players. Through procedural content generation, integrated machine learning models analyse the behaviour of players to tailor the game and even, potentially, to make it unique to each player. Through their textual inputs or prompts, players will soon be able to create characters, items, levels, and other game assets. This will become a key aspect of a more immersive gaming experience.
These examples are only a few illustrations of the enhancements that generative AI will bring to the gaming industry. But, as in other sectors, generative AI also raises questions regarding copyright infringement and the protection of these new creations.
The first legal challenge for games studios is to ensure at the input layer of AI that machine learning on pre-existing datasets does not infringe copyright.
Until now, machine learning has essentially been carried out using data deriving from the game itself, mostly through the analysis of players’ behaviour, which has raised data privacy issues. Copyright has been less relevant as the corresponding data is likely to be owned by the games studio already. Similarly, where studios train generative AI models using their own creative assets as training inputs, the copyright infringement risk associated with the training process is likely to be manageable.
The training of generative AI models on giant datasets raises additional issues. The training of generative AI models on datasets and materials found online has already led to several disputes across the world, many grounded in copyright infringement and whether any exceptions to infringement apply.
AI development has been taken into consideration in the text and data mining exception to copyright infringement provided under the EU Directive 2019/790 on copyright as implemented across EU jurisdictions (EU TDM exception). Under this exception, reproduction and extraction of copyrighted works for text and data mining purposes shall not constitute copyright infringement to the extent that access to said works is lawful and that the copyright holder has not opposed it by appropriate means, such as machine-readable means in the case of content made publicly available online. Reproductions and extractions made may be retained only for as long as is necessary for the purposes of TDM.
The EU TDM exception has been implemented fairly literally in German law as Section 44b of the German Copyright Act. According to the explanatory notes in the implementing bill, the reservation by the rights holder may also be made in the terms and conditions or the “impressum” of works accessible online, as long as it is machine-readable. There are ongoing discussions in Germany (but so far no known case law) on the question of whether and to what extent the exception covers not only analysing text and data, but also machine learning and training of (generative) AI, although recital 18 of the DSM Directive arguably favours a wider interpretation allowing machine learning as well as training of AI.
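In technical terms, respecting a machine-readable reservation typically means checking opt-out signals before mining a page. The following is a minimal illustrative sketch only, assuming two conventions that rights holders might use: a robots.txt disallow rule, and a `tdm-reservation: 1` HTTP response header as proposed in the draft W3C TDM Reservation Protocol. Neither convention is mandated by the Directive itself, and the function name and user agent are invented for illustration:

```python
from urllib import robotparser

# Header name taken from the draft W3C TDM Reservation Protocol --
# an assumed convention, not a statutory requirement.
TDM_HEADER = "tdm-reservation"

def tdm_permitted(robots_txt: str, response_headers: dict,
                  url: str, user_agent: str = "ExampleMinerBot") -> bool:
    """Return False if either machine-readable signal reserves TDM rights."""
    # Signal 1: robots.txt disallow rule for this crawler's user agent.
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    if not rp.can_fetch(user_agent, url):
        return False  # crawling disallowed for this agent

    # Signal 2: explicit reservation asserted via HTTP response header.
    headers = {k.lower(): v for k, v in response_headers.items()}
    if headers.get(TDM_HEADER, "0").strip() == "1":
        return False

    return True
```

A real crawler would of course also need to handle site terms and conditions and other reservation formats; the sketch only shows the shape of an automated, machine-readable check.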
Depending on the technical details, training of AI may also be covered by the exception for temporary acts of reproduction under Article 5(1) of the InfoSoc Directive (implemented as Section 44a of the German Copyright Act).
The “scraping” of databases protected under copyright has been the subject of a number of decisions by German courts as well as by the Court of Justice of the EU (CJEU). An infringement may occur through the extraction or reproduction of the whole database or a substantial part of it. In addition, according to the CJEU’s case law, the acts must adversely affect the investment of the maker of the database. An infringement may also occur in the event of repeated and systematic extraction or reproduction of insubstantial parts of the database. Notably, the EU TDM exception also applies to the sui generis database right; if the exception does not apply, scraping may accordingly infringe the owner’s database rights. In addition, the German Federal Supreme Court held in a 2014 case involving an online flight comparison website that scraping of airline websites may constitute an act of unfair competition. The mere fact that the scraping did not comply with the terms and conditions of the airline websites accessed was not in itself seen as sufficient. If, however, scraping involves a circumvention of technical access restrictions, it may be seen as unfair competition.
The UK has not implemented the EU TDM exception. There has been discussion among policymakers of expanding the UK's existing TDM exception that applies to non-commercial research to cover commercial purposes. While the government has decided not to proceed with these proposals for now, there remain policymakers supportive of a broader TDM exception. Separately, the UK Intellectual Property Office is producing a code of practice to support AI providers to access copyright works as a training input while ensuring there are protections on generated outputs to support rightsholders.
For now, games studios training generative AI models in the UK using unlicensed third party materials would need to rely on the 'temporary copies' exception. This applies to copies that are transient or incidental, an integral and essential part of a technological process the sole purpose of which is to enable a lawful use of the work, and which has no economic significance. The application of the exception to the machine learning context is currently untested and likely to be highly dependent on how the model is trained. In English High Court proceedings brought by Getty against Stability AI, Getty alleges (among other things) that Stability AI infringed copyright by using images scraped from its website to train the Stable Diffusion model, including by making and storing copies of the images in the training process. It is unclear at present whether Stability AI will seek to rely on any temporary copies exception and whether the court will accept that the exception applies if it does so.
Where games studios use third party AI models, the studio is unlikely to be considered liable for the way the third party model was trained. However, the training process will affect the risk that the model generates outputs that infringe copyright or other intellectual property rights in the materials it was trained on. For example, in the Getty v Stability AI proceedings in the English High Court, Getty claims that images generated by Stable Diffusion are very similar to Getty's images used to train the model, including by bearing a "Getty Images" watermark, and that they infringe Getty's copyright and trade marks. Games studios considering using third party generative AI tools to create material assets, including through API integration, should carry out careful due diligence on how the tool was trained.
Another question raised by generative AI in the field of copyright law is that of ownership of rights to creations generated by AI. In the context of gaming, this question arises for both the generative AI creations initiated by game developers and those resulting from player interaction (user generated content).
With respect to the use of generative AI in the games development process, the first question that needs answering is whether the resulting creations can still fulfil the “originality” criterion required for copyright protection. Originality is intrinsically attached to human involvement in the creation process, so where generative AI is used to create content, it is unclear who, if anyone, owns the copyright in that content.
In France, existing case law confirms that works created using computer systems can benefit from copyright protection to the extent that they show even the slightest hint of originality intended by the author. In the context of generative AI, it is questionable whether a fully AI-automated creation based on prompts that are mere functional expressions of the user's requirements will meet the originality criterion. However, if it can be shown that the prompts themselves meet the originality criterion and are reflected in the AI-generated output, which could then qualify as a derivative work, the output itself may be protected by copyright.
The French Higher Council for Literary and Artistic Property (CSPLA) addressed the issue in its 2020 report on the impact of AI in cultural sectors. The CSPLA is inclined to favour protection of generative AI creations, to ensure an effective return on the investments made in the AI solutions themselves. One possible option is to create a new sui generis right for AI creations, similar to the rights currently afforded to database producers. If adopted, this could give games studios reassurance.
German copyright law, in line with French law, only grants protection to works which qualify as individual human creations. So while the AI system itself will generally be protected (typically as a computer program), results wholly created by AI will not. Equally, 'downstream' creations by a generative AI tool will not be attributed to the developer of the AI tool. At the same time, where generative AI was merely used as a tool to create a work that otherwise still has sufficient human individuality, the work will be protected by copyright. Due to the risk of missing out on copyright protection, games creators and developers using generative AI as a tool will be well advised to document the human input in order to be able to show that copyright does exist.
Notably, certain neighbouring rights (attaching to, eg, sound recordings or sui generis databases) do not require a human creation and may accordingly subsist even in content fully generated by AI.
UK copyright law acknowledges that a copyright-protected work may be computer-generated, but the work must nevertheless be original in order for copyright to subsist. Recent case law affirms that the threshold for originality is low and that even very simple works (eg a yellow circle on a blue background) can qualify for protection. Characters can also be protected as original literary works, and it is quite possible that prompts inputted into a generative AI tool could contain sufficient creative content to be protected in and of themselves. With respect to ownership, if AI-generated content is considered a "computer-generated" work (which may depend on the extent of AI versus human input involved), the author is considered to be the "person by whom the arrangements necessary for the creation of the work are undertaken". This may be the individual who inputs the relevant prompts or it may be the AI provider – the point is currently untested. Where third party AI tools are involved, the issue of ownership of generated outputs should be addressed in the contract between the games studio and the AI provider.
Game players’ contributions are expected to be the most significant advancement of the games industry enabled by generative AI.
Additional infringement risks arise in this context as players may seek to input prompts that take inspiration from third party assets, for example to create characters, environments or items that exist in third party games, TV programs, films or books. A games studio may mitigate this risk to some extent in the design of the relevant AI tool offered to players (eg building a tool that rejects prompts containing certain keywords relating to well-known third party assets). If players cause infringing AI-generated content to appear in online games, the publisher may be liable for making that content available to the public, but may be able to claim the benefit of the hosting exception if it acts expeditiously to remove any infringing content it becomes aware of. The risk of players creating UGC that infringes third party rights already exists today. However, the introduction of generative AI tools may increase the incidents of infringement by making creation easier or, depending on the factual scenario, may affect a publisher's ability to rely on the hosting defence.
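The keyword-based mitigation mentioned above can be sketched very simply. This is an illustrative toy only: the blocklist entries and function name are invented, and a real studio would combine such a filter with classifier-based moderation, since a keyword list cannot catch paraphrased descriptions of protected characters:

```python
import re

# Illustrative blocklist -- in practice a studio would maintain a much
# larger, regularly updated list of well-known third-party asset names.
BLOCKED_TERMS = {"pikachu", "darth vader", "lara croft"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that reference well-known third-party assets.

    Normalises the prompt to lower case and collapses punctuation so
    that simple evasions like 'Darth-Vader' are still caught. This is
    only a first line of defence: it cannot catch paraphrases such as
    'a small yellow electric mouse creature'.
    """
    normalised = re.sub(r"[^a-z0-9 ]+", " ", prompt.lower())
    normalised = re.sub(r"\s+", " ", normalised)
    return not any(term in normalised for term in BLOCKED_TERMS)
```

For example, "a brave knight in a misty forest" would pass, while "generate Darth-Vader in neon armour" would be rejected.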
On 21 April 2021, the EU Commission introduced a proposal for an Artificial Intelligence Act. While the original proposal did not address generative AI and copyright, the European Parliament’s position adopted on 14 June 2023 contains several references in that regard.
The European Parliament's compromise draft of the AI Act expressly states that foundation models (which include generative AI) raise "significant questions" related to the generation of content in breach of Union law as well as copyright rules, and that the AI Act shall therefore be without prejudice to EU copyright law.
In addition, a provider of generative AI has to comply with a number of specific requirements. In particular, the provider must develop mechanisms that prevent the creation of illegal content and publish a “sufficiently detailed summary” of the copyright-protected training data. How these conditions, if enacted, are to be implemented in practice, remains to be seen.
The draft AI Act is currently being discussed by the EU legislative bodies and may become law as early as 2024, with a transitional period of 24 months before its provisions begin to apply.
Generative AI will certainly greatly advance the gaming industry, enabling streamlined development processes and empowering players to enhance their gaming experience. However, the resulting copyright issues raise considerable challenges which will no doubt be at the core of future statutory and case law developments.