6 February 2025
AI algorithms are often opaque 'black boxes'. Assessing accountability for decision-making processes and outcomes is necessary to protect deal value, but it is not straightforward. How can tech buyers and sellers ensure concerns around AI transparency, explainability, bias and discrimination don’t derail deals?
The explosion of generative AI brings a sharper focus on responsible AI. As AI systems become more intricate and pervasive, technology businesses face greater scrutiny regarding transparency, accountability and bias mitigation.
Whilst regulators primarily rely on existing anti-discrimination laws to tackle AI bias, they are starting to demand greater accountability. Authorities in the EU and US are targeting high-risk uses of AI and requiring developers to publish impact assessments and reports, for example under the proposed Algorithmic Accountability Act (2019) in the US.
The ability to interpret and explain AI decisions is crucial to fostering users' trust and preparing for future legislation. In a recent YouGov survey, nearly two-thirds of UK consumers said they want businesses and brands to be transparent about their use of AI. However, the lack of transparency and explainability in many algorithms is fuelling growing concern about the potential for bias and discrimination, particularly in hiring, lending and criminal justice.
Nascent regulation and limited oversight in the development of AI are also problematic. Unlike many other technologies, new AI algorithms and applications are not subject to rigorous pre-launch testing under current regulatory frameworks, which increases the risk of unintended consequences or harm if algorithms are biased or faulty.
A lack of transparency raises questions about accountability. Who should be responsible if an AI system makes a biased, discriminatory or criminal decision? Should it be the company that created the algorithm, the data scientists who trained it, or the end user who implemented it? Regulators are grappling with these questions and looking at ways to address accountability.
The EU’s position is clear: it is prepared to pursue any company that uses AI to break the law, even when no human intervention has occurred. In the US, the Federal Trade Commission has stated that companies must be accountable for their AI systems and take steps to prevent bias and discrimination.
Buyers of AI assets must understand how algorithms make decisions and identify any potential for bias and discrimination. The EU’s AI Act demands that companies understand the algorithmic models and data that make up their AI systems. In the event of a dispute or compliance query, can the seller provide understandable and justifiable reasons for the AI’s actions and outcomes? Without demanding AI transparency and explainability, buyers could inadvertently acquire a significant liability.
Jonny Bethell, Partner, Taylor Wessing
AI explainability is crucial in M&A deals. A comprehensive understanding of AI systems is vital to avoid integration challenges, hidden biases or decisions that could lead to regulatory fines or reputational damage.
So how can buyers and sellers be confident that AI decision-making processes and outcomes can withstand ethical scrutiny and the gaze of current and future legislation? How can they ensure an AI asset doesn’t become a liability?
Jonny Bethell, Partner, Taylor Wessing
Responsible AI diligence: a checklist for buyers
Prioritising transparency: a checklist for sellers