26 September 2024
Advertising Quarterly – Q3 2024
Generative AI holds immense potential to revolutionise marketing, and in many ways it already has, offering unparalleled opportunities for creativity, innovation and enhanced personalisation. A recent McKinsey report predicted a 10% productivity uplift globally in marketing alone as a direct result of implementing Gen AI, equating to approximately $463 billion of global spend.
The ability of Gen AI to enable anyone to create compelling content, analyse consumer behaviour and optimise campaigns can significantly improve marketing efficiency. However, in deploying these capabilities, businesses need to ensure that appropriate organisational guardrails and guidance are in place to navigate the legal and ethical landscape. Here we consider how to prepare internal guidelines for the use of AI in marketing.
To leverage AI in marketing effectively, it is important to understand the capabilities and limitations of your chosen AI system. Not all AI is created equal and, much like human talent, different products have different strengths, weaknesses and idiosyncrasies. Testing different products and becoming familiar with them will help ensure responsible use of the technology and optimise its potential, while also mitigating risks.
It may be worth limiting the tools approved for organisational use to a shortlist of known and trusted providers, while retaining some flexibility as the market rapidly evolves and new tools become available.
From a legal risk perspective, reviewing the terms of use for any AI system is vital. These terms outline how you can legally deploy the software, along with any restrictions on usage and potential liabilities. They will also set out the approach relating to rights, ownership and data processing. We outline some of the key terms of the main GenAI providers here.
Ensuring compliance with the terms, understanding the specific provider's approach and reflecting this in internal guidelines will help avoid legal repercussions and ensure the ethical application and deployment of the tools.
Care should also be taken to ensure that the approach adopted by a specific provider aligns with your wider internal policies and the law of the jurisdiction(s) in which you intend to deploy the tool. For example, if you intend to use an AI system for targeting, personalisation, profiling, customer communications (such as chatbots) or other forms of campaign optimisation that involve the processing of customer data, you should check that the way the service is provided complies with applicable data protection legislation and, where necessary, is reflected in your privacy policy.
Many AI tools re-use prompt and output data to further train the models on which they are built. This is not necessarily a problem, but it can become one if care is not given to what information is provided to the tool. Generative AI tools may be influenced by data provided by other users, and may even repeat that information in their outputs.
Confidential, sensitive and proprietary information, private or personal data, and any intellectual property (whether belonging to you or to a third party) should therefore never be entered into an AI system without appropriate justification. Many AI tools offer "closed garden" models whereby the input is either not reused at all or is re-used only within the enterprise customer's system. If there is a specific advantage to being able to input information that otherwise should not be shared, it may be worth considering deploying one of these tools, although measures to avoid undesirably influencing outputs should still be implemented. For example, inputting discriminatory language or offensive or inappropriate content should always be avoided.
Given the uncertainty around whether copyright subsists in GenAI content, human intervention in the creation process is critical. Wherever content is purely AI-generated, a mandatory second stage of human modification should be introduced to maximise the chances of demonstrating authorship in the final product.
The terms of use of any GenAI platform should help provide certainty in terms of what rights you hold, both in relation to the AI system itself and any inputs and outputs. Some free AI tools provide that the GenAI platform retains ownership of copyright in any inputs and outputs unless the user holds a business or enterprise licence. This can obviously restrict your ability to use GenAI content, as well as to continue to use inputs, commercially.
For example, where an agency is providing advertising materials for its client, the agency will need to ensure that it has the necessary rights in the materials both to comply with its contractual obligations and to enable the client to commercialise the material as intended. Getting this right will help avoid contractual disputes and third-party IP claims, as well as enabling the applicable rightsholder to enforce its rights and prevent others from ripping off its advertising content.
Understanding these rights, ensuring that the correct licence is in place and clearly communicating what this means to your employees and end users will help manage intellectual property appropriately and avoid disputes.
The creative power of generative AI, which underlies its value, can also represent one of its greatest risks. Generative AI systems may inadvertently generate content that infringes third-party intellectual property rights, reflects biases and stereotypes in the training data, replicates confidential information provided to them, or is defamatory or constitutes hate speech. Hallucinations can produce complete fiction and utter nonsense, and the quality of output can vary massively. When generating video or image materials, deepfakes of real people may be defamatory, an invasion of privacy, or constitute passing off (particularly if used in a commercial or promotional context).
To mitigate these risks, robust review processes by an appropriately trained (human!) person should be applied to all such outputs to identify and mitigate potential issues quickly. This is particularly important since the terms of most Generative AI platforms provide that users are liable for any infringements of third-party intellectual property rights or other civil or criminal wrongs (and include appropriate warranties and indemnities from the user to the platform).
While the growth of generative AI continues apace and the law plays catch-up, sight should not be lost of the ethical issues around the use of AI in marketing. Generative AI should be used in a clear and transparent manner, regular audits should be carried out to identify and mitigate biases and other defects in output, and, to maintain trust, the integrity of marketing materials should not be compromised in the pursuit of enhanced efficiencies.
By applying these principles when producing guidelines and ensuring appropriate staff training on the use of generative AI, you can leverage AI’s transformative power while safeguarding your legal, ethical and reputational standing. This balanced approach fosters trust with consumers and stakeholders, while facilitating innovation and driving efficiency.