2 December 2024
As artificial intelligence (AI) continues to transform industries, its impact on intellectual property (IP) law is becoming more pronounced. At the FT AI Summit 2024, a fireside chat with Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey, Giles Crown, Partner at Taylor Wessing, and Javier Espinoza, EU Correspondent at the Financial Times, explored the evolving challenges and opportunities AI presents to IP law. The discussion addressed key questions about ownership, infringement, and the future of IP protection as AI-generated works become more common.
One of the central issues discussed was the distinction between works created by human authors with the aid of AI and works generated entirely by AI. Traditionally, IP law requires human input for copyright and patent protection, but AI’s growing capabilities complicate this. When AI generates a work with little to no human intervention, does it still qualify for protection under existing IP frameworks? This question is critical for determining who owns AI-generated content and whether traditional IP laws can keep pace with AI’s rapid development.
The conversation also explored AI's potential impact on innovation and the rise of patent and copyright trolls. Because AI can now generate vast amounts of content, from inventions to artistic works, it could give rise to massive databases of intellectual property. This creates significant infringement risks, as AI-generated works may be derivative of pre-existing content. The panel discussed how this could lead to costly legal challenges, making it harder for businesses to navigate IP rights and infringement risks in an increasingly AI-driven landscape.
A key takeaway from the fireside chat was that existing IP laws are not equipped to handle the complexities of AI-generated works. The panel agreed that relying on outdated legal frameworks and courts to apply laws designed for a pre-AI world would not suffice. “Hard cases make bad law,” they argued, highlighting the need for a comprehensive policy review. The challenge, they noted, is to assess the pros and cons of extending IP protection to AI-generated works, considering the roles of AI providers, users, and other stakeholders. A broader approach is needed to address the scope and duration of IP protection for AI-generated content.
Another significant point raised was what exactly should be protected in the age of AI. The panel questioned whether the investment in AI models—often not inherently creative—should qualify for traditional IP protection. While patents and copyrights are meant to protect human creativity, AI models do not involve the same level of creative input. Experts suggested exploring alternatives such as database rights and contract protections, which may be better suited to addressing the nature of AI-generated works. This raises the question: Should IP law evolve to protect AI as a tool, or should we focus on protecting the data and models that underpin AI, rather than the works AI produces?
In the UK, the Copyright, Designs and Patents Act (CDPA) includes provisions on copyright ownership for computer-generated works, but these raise questions about the test of originality. The current law presupposes a human creator for copyright ownership, yet AI-generated works often lack the human "input" needed to qualify as original. The panel noted that existing copyright textbooks suggest ownership should only arise where the work would have been original had it been created by a human. AI-generated works might fail this test because they lack the "right kind" of human input, leaving ownership uncertain.
The issue of infringement in relation to AI-generated works is equally complex. Traditional copyright infringement turns on whether a substantial part of the author's intellectual creation has been copied. When an AI generates a work, however, the question becomes how to assess copying when no human creator is involved. The panel drew comparisons to cases like Andy Warhol's Heinz baked bean cans, where the infringement question turned on the artistic interpretation rather than the object itself. An AI-generated work may involve no author's intellectual creation at all, resting entirely on the AI's processing of pre-existing content. This raises the question of how infringement should be assessed in such cases and whether existing frameworks can adequately address AI's role in creation.
The discussion at the FT AI Summit 2024 left no doubt that IP law must evolve to keep pace with AI. As AI continues to play a larger role in creating content, current IP laws are being stretched beyond their limits. The panel stressed the importance of developing a more flexible and nuanced approach to IP law—one that considers not just the technology but the role of human creativity, the rights of AI providers and users, and the broader societal implications of protecting AI-generated content.
Ultimately, the future of IP in the age of AI is uncertain, but reform is needed. Whether through policy review, legislative change, or new legal frameworks, businesses, lawmakers, and creatives must work together to navigate this new landscape of intellectual property.
As AI continues to evolve, the questions surrounding IP will only grow more complex, and the need for a comprehensive and forward-thinking approach to AI-generated content will become increasingly urgent. The future of AI and IP is being shaped now, and it will be up to all stakeholders to ensure that the legal frameworks are fit for this new era.
by Nick Harrison and Giles Crown