
What games businesses need to consider when drafting a generative AI acceptable use policy

Martijn Loth highlights the top ten considerations to help games businesses mitigate risks associated with using generative AI when developing video games.



What’s the issue?

Generative AI (GAI) promises significant cost reductions, expedited workflows, enhanced productivity, and super-powered creativity. GAI can be used by games developers to auto-complete source code, to rapidly prototype a new character design simply by describing it in words, to synthesise natural-sounding dialogue between non-player characters (NPCs), to automatically generate subtitles in various languages, to automatically generate large numbers of assets (eg sunglasses, hairstyles, or helmets) for avatars in the same style as the original artwork, and much more.

Given these benefits, individual developers working for a games business, whether as employees or consultants, are likely to adopt a myriad of GAI-powered tools with or without the knowledge of the business. Letting this happen unchecked is likely to expose the business to leaks of confidential information, information security incidents, mishandling of intellectual property rights, and other risks.

What does this mean for you?

As a games developer or publisher, you should have an acceptable use policy in place that helps mitigate risk, ensures compliance with laws and ethical guidelines, and protects intellectual property, while still allowing employees and consultants to leverage the benefits of the technology. Here are ten things to consider when creating an Acceptable Use Policy for Generative AI (GAI-AUP).

  • Scope and context: GAI can create source code, text, audio, images, and other content in response to text and voice prompts and other input. Consider whether you want to regulate all of these types of GAI in the same way, and whether you have other (contractual) arrangements that should be taken into account in the policy. For example, your company may have a long-term volume commitment to a group of (human) voice actors, backed by a penalty clause, that may simply prevent you from using GAI to synthesise dialogue and narration. Another example could be a confidentiality obligation that could arguably prevent your company from using GAI that is not hosted and run on-premises. Given that the context for GAI is likely to evolve, make sure to build in the option to review and change the policy.
  • Terms and conditions: unless the GAI was developed in-house, it will almost certainly be subject to third party terms and conditions. We often see these terms written in favour of the vendor, allowing storage and review of games developers' prompts and other input and, in certain cases, allowing the vendor to copy and reuse that input for re-training the GAI model. You would, for example, not want your original artwork to be reused to generate game assets for avatars created by a competing company. Commercially offered GAI is also often subject to bandwidth or call limits which can lead to costly over-use fees if not monitored and complied with. Take care to include in your policy a requirement to review the GAI's terms and conditions.
  • Approach to regulation: consider how much scope you want to give your developers to explore new GAI tools on their own. We typically see two approaches: whitelisting – where all tools are prohibited until they are expressly allowed – and blacklisting – where all tools are OK to use until they are expressly prohibited. A mix of the two approaches is also possible, depending on the nature of the tooling. For example, you may feel comfortable using GAI for source code auto-completion for back-end services but less comfortable using GAI to generate graphics or source code that are visible to customers and competitors.
  • Intellectual property: for companies developing video games, intellectual property is your most prized possession, and the success and business value of the company are tied to its careful protection. With the question of who owns what rights to a computer-generated work remaining a topic of intense legal discussion around the world, there is an inherent risk of diluting your company's IP-related value if there is an over-reliance on GAI for source code or graphics generation. This is likely to be a factor for investors and buyers to consider when determining the value of your gaming company. On the other hand, when using GAI, it is also possible for developers to unintentionally infringe third party IP rights by reusing protected code or graphics. It is worth noting that a class action lawsuit was recently launched by open-source developers in the US alleging that their code was used to train the model behind GitHub Copilot without proper authorisation. While this suit is targeted at the creator of the GAI and not its users, it is not unthinkable that similar suits may be filed in the future against games developers and other users of GAI.
  • Confidentiality and trade secrets: much of the work that goes into developing a game is done behind the scenes, and developers will often want to make sure that all aspects of the game remain confidential until the company is ready to publish it. If the GAI is hosted by the vendor and run remotely via APIs or as a SaaS, it is not beyond the realms of possibility that confidential information could leak to the vendor or to other users of the GAI if the input can be reused for re-training the model. We are increasingly seeing developers use source code auto-completion tools that (by default) use surrounding code and open files for context to improve code suggestions, but these may contain sensitive information such as access tokens, usernames, and passwords. Additionally, if you are protecting assets that hold commercial value and are not known to the public, taking appropriate measures to protect their confidentiality is crucial to being able to claim trade secret protection in the EU. So be sure to take mitigating measures (eg some tools have optional settings for business users that allow you to opt out of your inputs being used for re-training purposes) into consideration when drafting your GAI-AUP.
  • Privacy and data protection: you should also consider whether the use of GAI entails a processing of personal data and, if so, what other measures need to be taken (eg data protection impact assessment, data processing agreement, additional technical and organisational measures, or a review of data transfer instruments) and how this fits within your data protection by design and by default policy (see our checklist here). GAI could, for example, be used to easily generate and send emails to a group of customers to test an early prototype of the game or to offer them a chance to provide early feedback on world designs. Read more about data protection and AI in games here.
  • Information security: ultimately, GAI is another piece of your application landscape. Accordingly, you will need to make sure that the security it implements aligns with your information security policy. This is particularly important if the GAI is hosted with the vendor (or a third party) and you are using the GAI remotely through an API or as a SaaS.
  • Human oversight and quality control: while GAI offers tremendous promise for its users, the technology is still in its early stages and it is currently far from predictable or foolproof. Allowing for the use of GAI without a human review process to ensure the quality of the results may expose your company to damages or reputational loss that could have easily been avoided, so consider embedding human oversight into your policy.
  • Ethical use of AI: regulators across the world have emphasised the need for companies using and developing AI – generative or not – to consider whether their use of such tools is responsible and ethical. Topics such as bias, transparency, diversity, non-discrimination, and environmental and societal wellbeing, should come to mind while drafting a GAI-AUP. For example, when using GAI for the generation of NPCs, gaming companies should ensure that the data set used to train the model is sufficiently inclusive of their player audience. One noteworthy example of a company doing this early on is Avalanche Software, a gaming company that has been praised for ensuring their character designs offer representative and inclusive experiences for all players in their action role-playing game, Hogwarts Legacy. See more on this here.
  • Compliance and sanctions: once you have a GAI-AUP in place, you will need to determine the best way to make the policy binding on employees (and contractors) and determine the most efficient way of monitoring compliance through both technical (eg group policies, network monitoring) and organisational means (eg mandatory training sessions and audits). We are increasingly seeing the GAI-AUP as a standard reference in employee handbooks in addition to general IT AUPs. Breaches of the GAI-AUP should have consequences depending on the severity of the violation, such as warnings, retraining, suspension, or even termination.

Please do contact us for help with drafting an Acceptable Use Policy for Generative AI, with AI procurement terms, or with reviewing third party terms and conditions for GAI.
