4 November 2024
AIQ - Autumn – 2 of 7 Insights
This article was written by Carlos Rivadulla Oliva of ECIJA, our strategic alliance partner firm.
With the EU's AI Act having come into force on 1 August 2024, the EU is at the forefront of regulating artificial intelligence. Businesses operating in the EU must prepare for the phased application of the Act's requirements and obligations, which will apply to a greater or lesser degree to all operators in the AI value chain.
Central to the preparation process is the EU AI Pact, also announced on 1 August 2024. This is a non-legislative, voluntary commitment by companies to comply with the principles and future obligations laid out in the AI Act before the relevant provisions become applicable. The Pact serves both as a soft landing for businesses to test their compliance and as a political move to engage stakeholders early.
The EU AI Pact is significant because it allows businesses to get ahead of the compliance curve. It emphasises collaboration between the public and private sectors to address the risks posed by AI technologies. Signatories commit to the ethical use of AI, focusing on ensuring that AI systems are lawful, transparent and accountable, reflecting the risk-based approach of the AI Act. Although participation is voluntary, joining the AI Pact sends a strong message of corporate responsibility and readiness for the incoming obligations under the AI Act. On 25 September 2024, the European Commission announced that over 100 companies had signed up, including Amazon, Google and Microsoft.
Among the many obligations that companies will face under the AI Act, one stands out as particularly critical: AI transparency. The AI Act divides AI systems into categories based on their risk profiles, with “high-risk” systems subject to the strictest requirements. One of these is the demand for transparency, which means that operators of high-risk AI systems must provide clear information about how their systems function and make decisions.
Transparency is essential for building trust in AI systems and ensuring accountability. The transparency requirements under the AI Act are multifaceted. First, the Act mandates that users be informed when they are interacting with an AI system rather than a human, especially in cases involving automated decision-making. Second, companies must be able to explain, in layperson's terms, how the AI system operates, particularly how it processes data and arrives at specific outcomes.
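By way of illustration only, the sketch below shows one way an operator might surface such a disclosure in practice. It is written in Python, and the system name, notice fields and wording are assumptions made for the example rather than text drawn from the AI Act.

from dataclasses import dataclass

@dataclass
class AIInteractionNotice:
    # Illustrative record of what a user is told before interacting with an AI system.
    system_name: str               # hypothetical identifier for the AI system
    plain_language_summary: str    # layperson's explanation of how outputs are produced

def disclosure_banner(notice: AIInteractionNotice) -> str:
    # Builds the text shown to the user at the start of the interaction,
    # making clear that they are dealing with an AI system rather than a human.
    return (
        f"You are interacting with an automated AI system ({notice.system_name}). "
        f"{notice.plain_language_summary}"
    )

print(disclosure_banner(AIInteractionNotice(
    system_name="loan-eligibility-assistant",
    plain_language_summary=(
        "It compares the details you provide against historical lending data to "
        "suggest an outcome, which is reviewed by a person before any decision is made."
    ),
)))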
The complexity of many AI systems poses a challenge, particularly in the context of advanced machine learning models like neural networks. Organisations must prioritise not only understanding the technical workings of their AI but also translating these mechanisms into clear and comprehensible terms for regulators, users, and stakeholders. Compliance with transparency requirements will also likely involve documentation and regular audits of AI systems to ensure they are functioning as intended and are aligned with the principles of fairness, accountability, and non-discrimination.
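As a purely hypothetical sketch of what such documentation might look like in code, the example below appends each decision of a system to a simple audit log so that it can later be reviewed. The field names, model identifier and file path are assumptions made for illustration, not anything prescribed by the AI Act.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    # One auditable record of an AI-assisted decision (illustrative fields only).
    model_version: str      # which version of the model produced the output
    inputs_summary: dict    # minimised summary of the data the decision was based on
    output: str             # the outcome communicated to the user
    explanation: str        # plain-language reason given for the outcome
    human_reviewed: bool    # whether a person checked the result
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_to_audit_log(entry: DecisionLogEntry, path: str = "audit_log.jsonl") -> None:
    # Appends the entry as one JSON line so that periodic audits can replay decisions.
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(entry)) + "\n")

append_to_audit_log(DecisionLogEntry(
    model_version="credit-scoring-v2.3",
    inputs_summary={"income_band": "B", "employment_status": "full-time"},
    output="eligible",
    explanation="Income and employment history met the published eligibility criteria.",
    human_reviewed=True,
))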
In tandem with the AI Act, the AI Liability Directive (AILD) is intended to play a crucial role in harmonising the legal landscape for AI across the EU. The AILD is designed to establish clear rules regarding liability for damage caused by AI systems. It focuses on facilitating claims for those harmed by AI, making it easier to prove causality and liability in cases involving complex AI systems.
The European Parliament and EU Council agreed the text of the AILD in December 2023, nearly a year ago, but there are suggestions that progress has stalled and the current version may yet be significantly amended or withdrawn altogether.
The European Parliament's JURI committee is expected to decide shortly whether to proceed with the Directive as it stands. This follows an impact assessment by the European Parliamentary Research Service, published in September 2024, which called for changes amid concerns that the AILD overlapped too much with the AI Act and the recently agreed revised Product Liability Directive. The Research Service's recommendations include that the legislation should take the form of a Regulation rather than a Directive, that the focus should shift towards software liability, with the scope extended to non-AI software in order to align with the revised Product Liability Directive, and that certain areas of liability and damages claims should be extended.
As the legislative landscape continues to evolve, organisations must stay agile and informed, actively preparing for both the AI Act and, potentially, the AI Liability Directive, in order to mitigate risks and capitalise on the benefits of compliant AI innovation.