It was fantastic to be joined by over 120 guests from across the tech ecosystem for our AI Decoded event last week, where we spent half a day discussing the opportunities and challenges of artificial intelligence (AI).
How are businesses deploying AI?
After a short welcome by our AI chatbot, Litium TW, our first panel looked at AI in practice. Partner Chris Jeffery was joined by Lee Cottle, Interim Chief Executive Officer, Humanising Autonomy; Suzanna Temple-Morris, Head of Legal, Activision (King); and Mike Stevenson, Principal, Amazon Web Services, to discuss use cases for AI.
With 96% of attendees polled indicating they've already started using AI in their business, it was useful to hear how our panellists have implemented it, the biggest challenges they've faced, and where they see AI technology going next.
Several recommendations for implementing AI emerged during the panel:
- The AI 'toolbox' has been filling up with useful tools over the past few years (with a number of significant developments since 2022). These solutions aren't one-size-fits-all, so make sure you're using the right tool for the right job. Focus on the use case first, then identify which tool fits best.
- The quality of the data the AI is trained on is crucial. If you put garbage in, you'll get garbage out. Training data must also be specific and relevant to the problem you're trying to solve.
- If you're going to use generative AI, free tools may be attractive from a cost perspective, but you should carefully review licence terms and consider whether enterprise tools, which typically offer more protection, might be a better fit.
AI and liability, intellectual property and employment
Next up, attendees were given a choice between three breakout sessions covering recent AI developments related to product liability, intellectual property (IP) and employment.
- Navigating liability: In this session, Partners Katie Chandler and Philipp Behrendt discussed who's liable when AI fails, new risks from AI products and regulatory developments concerning AI and product liability.
- AI innovations in intellectual property: In this session, Partners Xuyang Zhu and Gregor Schmid discussed open questions concerning AI and IP infringement, pending legal cases in the EU and UK, and the AI Act and copyright.
- Navigating the human element: In this session, Partners Paul Callaghan and Helen Farr discussed AI's impact on individuals, covering topics including employment use cases, risk, confidentiality and the regulatory landscape.
The AI outlook and investment trends
For our second panel, Partner Josef Fuss was joined by Eze Vidra, Managing Partner, Remagine Ventures; David Martínez Rego, Co-founder, DataSpartan; and Zoe Qin, Investor, Dawn Capital, to discuss recent AI investment trends. In a wide-ranging conversation, our panel covered:
- what types of companies are seeing investment and the increase in companies exaggerating their AI capabilities to attract funding, also known as 'AI washing'
- the types of deals we saw in Q1 and the dominant position of the US in M&A activity
- where we are on the Gartner hype cycle, with our audience fairly evenly split: 43% believe it's still an exciting time to invest in AI, while 48% believe we're in a levelling-off period and that it makes sense to wait and see how the market develops
- reasons why AI implementation by companies has been slow but is now picking up speed
- the different approaches US and EU regulators have taken concerning AI and why the US's more flexible approach may give it the edge.
Navigating AI governance and regulation
In our final panel of the day, Of Counsel Jo Joyce was joined by Sophia Ignatidou, Group Manager for Artificial Intelligence Policy, the Information Commissioner's Office; Alice Tickle, Global Data Privacy Lead, Revantage; and Fritz-Ulli Pieper, Partner, Taylor Wessing Germany, to discuss the challenges of balancing innovation with responsibility in AI governance.
Topics covered included:
- what AI governance means in the wider context of AI
- how the ICO plans to approach regulation and remain agile in the face of rapidly evolving AI technologies
- the importance of getting senior buy-in for AI governance and why it should be factored into budgets for AI projects.
It was reassuring to learn that 77% of attendees polled already had an AI governance policy or framework in place. For those that didn't, our panellists shared their views on where SMEs should start with AI compliance, which included:
- establishing a cross-disciplinary team, with legal working closely with tech to identify risks
- education and training on what AI governance means
- focusing on the problem you need to solve and making it clear that any use of technology comes with trade-offs.