
8 April 2024

The video game industry in 2024 – 2 of 6 Insights

AI and games

Xuyang Zhu and Martijn Loth look at the UK and EU approach to regulating AI and how that is likely to impact the video game industry.

Authors

Xuyang Zhu

Partner


Martijn Loth

Counsel


The video game industry was an early adopter of AI in its quest to provide enhanced gameplay and to develop and publish innovative games at speed while reducing costs. Automation of non-player characters (NPCs) and personalisation of gameplay have long been AI-driven. As the uses of AI in games continue to evolve alongside the legal framework, game developers, studios and publishers have a variety of legal issues to consider.

The UK approach

Unlike the EU, whose new AI Act looks to govern the use of AI top-down, the UK government has decided not to legislate to create a single regulator or function to govern AI. It has instead elected to support existing regulators in developing a sector-focused, principles-based approach. Regulators including the ICO, the CMA, the FCA, Ofcom, the Health and Safety Executive and the Equality and Human Rights Commission will be required to consider five principles to build trust and provide clarity for innovation, and will produce non-statutory guidance.

While specific guidance will undoubtedly be helpful and is already being produced by some regulators, there are a number of issues to be aware of when using AI in or to develop games. 

Copyright and ownership

AI models need to be trained. Where they are trained on existing creative works, questions of copyright will be engaged. This is particularly relevant for generative AI models, which are often trained on datasets and materials found online.

The UK government has confirmed it will not be introducing an exemption for text and data mining (equivalent to the EU TDM exception). For now, AI developers training generative AI models in the UK using unlicensed third party materials would need to rely on the 'temporary copies' exception. This applies to copies that are transient or incidental, that are an integral and essential part of a technological process the sole purpose of which is to enable a lawful use of the work, and that have no independent economic significance. The application of the exception in the machine learning context is currently untested and likely to be highly dependent on how the model is trained. The Getty v Stability AI case currently proceeding in the English High Court may shed light on the issue. Game studios and publishers may find themselves on both sides of this situation – some may wish to train their own generative AI tools (note that copyright infringement considerations do not arise if a studio trains an AI model on materials that it owns, eg from its previous games), whereas others may find their works are being used for AI training by third parties.

Where game studios and publishers use third party AI models, the studio is unlikely to be considered liable for the way the third party model was trained. However, the training process will affect the risk that the model generates outputs that infringe copyright or other intellectual property rights in the materials it was trained on. Game studios and publishers considering using third party generative AI tools to create material assets, including through API integration, should carry out careful due diligence on how the tool was trained and, where possible, obtain contractual protections from the AI provider.

UK copyright law acknowledges that a copyright-protected work may be computer-generated, but the work must nevertheless be original in order for copyright to subsist. The threshold for originality is fairly low but the amount of human creative input required has not been tested in the AI context.  It is therefore advisable for studios and publishers using AI to generate material creative assets (eg main characters, key plotlines) to ensure that prompts contain sufficient creative content to be protected in and of themselves, and that the generated output is further amended using human creativity.  These considerations may be less important where AI is used to generate non-material assets, eg sound effects and backgrounds.   

With respect to ownership, if AI-generated content is considered a "computer-generated" work (which may depend on the extent of AI versus human input involved), the author is considered to be the "person by whom the arrangements necessary for the creation of the work are undertaken".  This may be the individual that inputs the relevant prompts or it may be the AI provider – the point is currently untested.  Where third party AI tools are involved, the issue of ownership of generated outputs should be addressed in the contract between the game studio and the AI provider.  

Player-generated content is expected to be the most significant advancement of the games industry enabled by generative AI. In this context, players are most likely to use AI tools provided by the game studio, or potentially third party tools integrated into a game via an API. Insofar as copyright in any resulting generated outputs (including in the player's prompts) vests in the player, transfer of ownership of that copyright to the studio may be secured under contract (eg the relevant EULA or other terms of use). However, there remains the question (discussed above) as to whether the generated outputs are eligible for copyright protection in the first place.

If players cause infringing AI-generated content to appear in online games, the publisher may be liable for making that content available to the public, but may also be able to claim the benefit of the hosting exception if it acts expeditiously to remove any infringing content it becomes aware of.  

Open source software (OSS)

OSS AI tools such as the Unity Machine Learning Agents Toolkit (ML-Agents) have emerged, offering environments for training intelligent agents using deep reinforcement learning and imitation learning, and these tools have quickly become popular with game developers. Using open source AI tools offers undoubted benefits, but also carries risks that need to be properly addressed to ensure the long-term future of the games relying on them.
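
By way of illustration, the sketch below shows how a game build might be driven from the ML-Agents low-level Python API to collect agent observations and send actions during training. The build name is a placeholder and exact class and method names vary between toolkit releases, so treat this as an indicative sketch rather than production code.

```python
# Indicative sketch: driving a Unity game build through the ML-Agents
# low-level Python API. "MyGameBuild" is a placeholder; exact names may
# vary between toolkit releases.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="MyGameBuild")  # or None to connect to the Unity Editor
env.reset()

# Each "behaviour" corresponds to a group of agents sharing the same policy.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    for _ in range(100):
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        if len(decision_steps) > 0:
            # A real trainer would compute actions from a learned policy;
            # random actions are used here purely to show the control loop.
            actions = spec.action_spec.random_action(len(decision_steps))
            env.set_actions(behavior_name, actions)
        env.step()

env.close()
```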

Personal data

To use AI to improve player experience, AI models need to be trained, which can involve tracking millions of player interactions and data points, including those of children.  This carries inherent risk in an already highly regulated area. 

The underlying principles which the UK government is proposing to require regulators to take into account when regulating AI overlap heavily with the UK GDPR principles. Given the issues around fairness, transparency and explainability, lawful basis and data security, not to mention the fact that many AI tools will process special category data (eg biometric data), game studios and publishers using AI in games need to consider data protection every step of the way. A Data Protection Impact Assessment will also likely be required before using AI which processes or generates personal data.

There are inherent conflicts between using data-driven AI, and data protection – for example, around data minimisation. The ICO has already produced extensive guidance on AI and data protection with more to come.

Ethics and accountability

AI algorithms are only as good as the data they are trained on, and the potential for bias in training data, which can lead to discrimination or inequality in outputs, is well documented. Transparency, fairness and accountability should govern the use of AI. Steps to ensure ethical use of AI may include:

  • Assess bias and fairness: regularly evaluate AI algorithms for potential biases and unfairness, taking steps to mitigate them.
  • Obtain user consent and protect data privacy: ensure valid user consent for data collection and storage, adhering to relevant data protection laws.
  • Strive for transparency: provide understandable explanations of AI-driven features and mechanics to promote transparency and user understanding.
  • Prioritise user safety: implement safety measures and effective moderation systems to protect users from harm.
  • Comply with consumer protection laws: adhere to consumer protection regulations by providing accurate information and transparent communication about AI-driven features and potential risks.
  • Monitor AI systems and address issues: conduct regular audits, monitor AI systems for potential harm, and promptly address any issues or risks that may arise.
  • Stay informed and engage in industry discussions: keep up to date with emerging regulations and participate in industry discussions to contribute to responsible AI practices. 

Other issues

There may also be consumer protection issues, competition issues and online safety issues associated with the use of AI in games.

Reducing risk when using generative AI in games

Game studios and publishers should have an acceptable use policy in place for employees and consultants, one that helps mitigate risk, ensures compliance with laws and ethical guidelines, and protects intellectual property while still allowing them to leverage the benefits of the technology. The policy should cover:

  • scope and context of use of AI in the game
  • a requirement to consider any relevant third-party terms and conditions
  • special considerations around use of OSS
  • whitelisting and/or blacklisting use of AI – whether by tool or for particular purposes (illustrated in the sketch after this list)
  • obligations to protect confidentiality and trade secrets
  • privacy, data protection and cyber security requirements
  • human oversight and quality control procedures
  • ethical considerations and elimination of bias
  • ways in which compliance with the policy will be assessed and enforced.
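
As a purely hypothetical illustration of how a whitelisting/blacklisting requirement might be operationalised, the sketch below checks a proposed AI tool and purpose against a simple policy table. The tool names, purposes and policy structure are invented for illustration only and do not reflect any particular studio's policy.

```python
# Hypothetical sketch of enforcing an AI acceptable use policy in code.
# Tool names, purposes and policy entries are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyDecision:
    allowed: bool
    reason: str

# Example policy table: tool -> purposes approved after legal/IP review.
APPROVED_USES = {
    "internal-asset-generator": {"concept art", "placeholder textures"},
    "dialogue-assistant": {"npc dialogue drafts"},
}
BLOCKED_TOOLS = {"unvetted-public-chatbot"}

def check_use(tool: str, purpose: str) -> PolicyDecision:
    """Check a proposed AI tool/purpose combination against the policy."""
    if tool in BLOCKED_TOOLS:
        return PolicyDecision(False, f"{tool} is blacklisted under the AI policy")
    allowed_purposes = APPROVED_USES.get(tool)
    if allowed_purposes is None:
        return PolicyDecision(False, f"{tool} has not been reviewed; request approval")
    if purpose not in allowed_purposes:
        return PolicyDecision(False, f"{tool} is not approved for '{purpose}'")
    return PolicyDecision(True, "approved use")

print(check_use("dialogue-assistant", "npc dialogue drafts"))   # approved
print(check_use("internal-asset-generator", "key plotlines"))   # not approved for purpose
```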

Training on these issues should also be provided. 

EU AI Act

Many of the issues highlighted above are not UK-exclusive, and different jurisdictions are taking different approaches to dealing with them. Game studios and publishers will, in particular, need to consider the applicability and impact on their activities of the EU’s AI Act, the final text of which was approved on 13 March 2024. The AI Act will apply in stages following its entry into force: most obligations are expected to apply from mid-2026, but certain obligations (including the prohibitions) may apply as early as the end of 2024.

The EU AI Act is an extensive regulatory framework that aims to regulate the entire supply chain of ‘AI systems’. Its definition of what constitutes an AI system will include several applications of AI in the game industry, such as content creation and animation enhancements through the use of generative AI, but the definition is also sufficiently broad to include finite state machines and behaviour trees.

Game studios and publishers creating, importing, distributing, integrating, or using AI systems will need to comply with obligations that become more elaborate and expensive as the risk of the AI system increases from (no or) ‘minimal risk’ through ‘limited risk’ and ‘high risk’ to ‘unacceptable risk’. The obligations imposed by the AI Act range from transparency duties (eg making it clear to users that they are talking to an NPC/bot), to establishing and enforcing full-blown risk and quality management systems (in accordance with yet-to-be-published industry standards), to the use of certain AI systems being completely prohibited.

Although the European legislator has, confusingly, labelled AI-enabled video games as ‘minimal risk’ by default on its website (and many have parroted this as legal fact), the latest text of the AI Act (presumed to be final) does not confirm any such favourable risk assessment, and the use of AI systems that deploy ‘subliminal techniques’ (arguably at the heart of immersive gameplay) could even be prohibited in certain circumstances. Studios and publishers would do well to start mapping the technology in their game titles to the risk categories introduced, determining the gaps in their level of compliance, and implementing proper governance instruments (eg an AI strategy and vision document, an acceptable use policy, and codes of conduct).
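
As a starting point for that mapping exercise, a studio might maintain a simple inventory tying each AI-enabled feature to a provisional risk category and a follow-up action, as in the hypothetical sketch below. The feature names and classifications are illustrative assumptions only, not legal conclusions about any particular game.

```python
# Hypothetical inventory mapping AI-enabled game features to provisional
# EU AI Act risk categories. Feature names and classifications are
# illustrative assumptions, not legal conclusions.
from enum import Enum

class RiskCategory(Enum):
    MINIMAL = "minimal risk"
    LIMITED = "limited risk (transparency duties)"
    HIGH = "high risk"
    PROHIBITED = "unacceptable risk (prohibited)"

ai_feature_inventory = [
    # (feature, provider/tool, provisional category, follow-up action)
    ("generative NPC dialogue", "third party model via API", RiskCategory.LIMITED,
     "disclose to players that they are interacting with an AI system"),
    ("procedural texture generation", "in-house model", RiskCategory.MINIMAL,
     "document training data provenance"),
    ("engagement-optimising difficulty tuning", "in-house model", RiskCategory.LIMITED,
     "assess whether techniques could be considered manipulative or subliminal"),
]

for feature, tool, category, action in ai_feature_inventory:
    print(f"{feature} [{tool}] -> {category.value}: {action}")
```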

The AI Act also prescribes that deep fakes (ie AI-generated or manipulated images, audio or video content “that appreciably resembles existing persons, places or events” and would falsely appear to a person to be authentic) must be clearly disclosed as having been artificially generated or manipulated. Given that imagery in games often intentionally resembles real-life people, places or events (eg Keanu Reeves as Johnny Silverhand in Cyberpunk 2077, the Georgetown surroundings in Fallout 3, or the depiction of the Nijmegen bridge in Medal of Honor: Frontline), studios and publishers should consider the most appropriate way to disclose deep fakes to their players without hampering the enjoyment of the game.

Likely inspired by the deterrent effect of the hefty penalties levied under the EU GDPR, the AI Act provides that non-compliance could expose businesses to fines of up to EUR 35 million or 7% of their total worldwide turnover, whichever is greater. Also similar to the EU GDPR, the AI Act will affect not just European studios and publishers but also non-European ones, depending on, among other factors, their targeting of the EU market and whether or not the AI system’s output is used within the EU.

An evolving approach

It remains to be seen whether the UK or EU approach to regulating AI is more successful and whether the EU's AI Act becomes the benchmark in a similar way to the EU GDPR.  In the absence of a global consensus, game businesses will need to keep on top of developments across the jurisdictions in which they operate.

Read more

Access the fifth edition of our Play Guide for more on key issues impacting the video game sector.
