4 November 2024
AIQ – Autumn – 3 of 7 Publications
Several major tech companies have recently postponed the release of new AI features and services in the EU. In almost all cases, the press has cited the legal challenges these companies face in ensuring compliance with the latest EU regulations before launching their AI innovations. But could there be more strategic reasons at play?
In an article published by The Verge, Apple’s decision to delay the release of its 'Apple Intelligence' AI features in France and across the EU was attributed to "regulatory uncertainties" stemming from the Digital Markets Act (DMA). These AI capabilities will be rolled out gradually worldwide, with EU countries among the last to gain access. Apple reportedly had concerns about the DMA's interoperability requirements, which could force the company to open its ecosystem. While Apple is said to be working with the European Commission to ensure these features are introduced without compromising user safety, the actual link between delaying the launch of Apple Intelligence in Europe and addressing these concerns remains unclear.
This decision to delay the launch of AI capabilities in the EU is by no means unprecedented. In early October 2024, OpenAI introduced its highly anticipated 'ChatGPT Advanced Voice Mode' in the UK but chose not to release it in EU countries. Reports indicate that OpenAI attributed this decision to the need to comply with EU regulations, specifically the EU AI Act. The press has highlighted Article 5 of the EU AI Act, which prohibits the use of AI systems for inferring emotions. However, Article 5 only applies to the use of this type of AI within "areas of workplace and educational institutions", leaving the connection between Article 5 of the AI Act and this new ChatGPT feature somewhat ambiguous. Perhaps for this reason, in an October 22nd tweet, OpenAI did finally announce its decision to roll out the feature across the EU.
The GDPR is also regularly cited as a potential stumbling block to AI development in the EU. In June 2024, Meta held its developer conference, where it announced that upgrades to its Llama AI product would not be possible for the time being in Europe. In a public statement, Meta explicitly stated that its delay was related to GDPR compliance issues, particularly in light of scrutiny from the Irish Data Protection Commission (DPC). According to Meta, requests made by the DPC hindered the training of its large language model, which relies on public content shared on Facebook and Instagram. While Meta has made the pause of its use of EU data to train its AI model permanent in the EU, it has resumed these processing activities in the UK, where the ICO continues to maintain a watching brief but has not so far required Meta to cease the processing.
This was not the first time Meta has run into regulatory scrutiny over its use of AI. Three years ago, it announced it would cease using facial recognition technology for tagging purposes on Facebook in light of privacy concerns. On 21 October 2024, however, it said it was planning to start using facial recognition again to verify user identity, help recover hacked accounts, and detect and block some types of scam ads. Interestingly, Meta said it would not be testing facial recognition for identity verification purposes in the EU, the UK and the US states of Texas and Illinois, jurisdictions in which it is continuing to have conversations with regulators. Meta’s vice president for content policy is reported to have said that the "European regulatory environment can sometimes slow down the launch of safety and integrity tools like this. We want to make sure we get it right in those jurisdictions".
Whichever EU regulatory framework is cited in the above cases — the DMA for Apple, the AI Act for OpenAI, or the GDPR for Meta — the outcome is that EU consumers may experience short-term delays in accessing innovative AI technologies. Looking at the longer-term prospects, though, these regulatory frameworks arguably present an opportunity for tech businesses. While it is true that businesses may need to postpone releases of new AI technologies and features, as Meta has indicated, these organisations will be working to ensure that their products meet EU regulatory requirements while also preserving their commitment to user privacy and data security in a complex regulatory landscape. Creating customer trust will be fundamental to take-up, so taking the time to get it right may actually increase profitability, which, in turn, will further fund innovation.
Whether or not the EU's approach to regulation leads to enhanced consumer protections at the expense of technological progress in Europe is yet to be determined, but it is important to recognise the ongoing interaction between big tech corporate strategies and regulatory oversight when launching AI capabilities in Europe.
by Benjamin Znaty
by Paolo Palmigiano