The turning point has arrived: after years of negotiations and countless discussions on the EU AI Act, the final version is expected to be adopted in mid-July 2024. From then on, companies will have only 24 months before far-reaching obligations take effect for most of them. The AI Act is the world's first comprehensive legal framework for regulating artificial intelligence. Algorithms based on machine learning, deep learning or so-called Generative Pre-trained Transformers (GPT or GenAI) have long been a source of great hope, but also of some fears. Authorities have accordingly been issuing a growing canon of statements on AI. We have compiled a selection of the most important ones.
Regulatory framework for AI before the AI Act
Artificial intelligence (AI) increasingly permeates all areas of life and raises complex questions, ranging from great opportunities to technological risks and ethical considerations. The processing of personal data, copyrighted material and trade secrets gives rise to numerous legal challenges. There are also concerns that companies with a large market share may stifle innovation by investing heavily in AI start-ups.
The implications for the economy, society and individuals have resulted in a comprehensive regulatory landscape on AI systems. In this respect, authorities have recently published several guidelines and recommendations, among other things on data protection, IT security and competition law. These concern both the technical and economic development framework and the use of AI systems.
Overview of statements from authorities on AI
We have compiled these statements here, organized by area. Essentially, they fall into the following categories:
Data protection: Data protection authorities address the interaction between data protection requirements and AI technologies. This often concerns the transparency of data processed by AI systems, as well as data minimization and other basic principles under the GDPR.
Cybersecurity: In this respect, guidelines focus primarily on how AI systems can be secured against threats. In particular, this involves guidelines for identifying security risks and mitigation strategies for attacks on AI applications in critical infrastructures.
Competition law: Competition regulators investigate the extent to which AI systems, or their use, can promote innovation or influence market practices relevant under antitrust law. The assessment of such collaborations reflects the effort to ensure fair competition involving modern technologies.
These regulatory measures are intended to ensure that AI systems are developed and used ethically, in a trustworthy manner and in accordance with legal requirements. Authorities can be expected to continue building on the principles they have developed and to take them into account in their decisions; their previous statements are likely to remain relevant even after the AI Act comes into force.
Outlook on the AI Act
The AI Act will have far-reaching implications for both the development and use of AI. However, the practical consequences for providers and users are still unclear in many areas. Initial publications by authorities such as the "NIST AI RMF Generative AI Profile" or the guidelines of the French data protection supervisory authority (CNIL) on the use of AI systems provide initial indications. But the AI Act will also be further substantiated in the future by voluntary codes of conduct, guidelines and delegated acts. In particular, the AI Act provides for the following:
- Voluntary codes of conduct: These will be promoted by the AI Office and EU Member States with the aim of considering the objectives and requirements of the AI Act on a voluntary basis. It remains to be seen what the practical relevance of these will be, given the large number of obligations in the AI Act.
- Commission guidelines: These will be more relevant for practical implementation of the actual obligations. They will provide valuable information on, among other things, the scope of prohibited systems, the precise obligations of providers of high-risk systems and the practical implementation of transparency obligations. To date, there is still a great deal of legal uncertainty in these areas.
- Practical guidelines for general AI models: These will be developed by the AI Office for the specific obligations and are intended to help with risk assessments and the preparation of transparency reports by providers.
- Guidelines on transparency: These will be issued by the AI Office and will facilitate the identification and labelling of artificially generated or manipulated content.
More about the AI Act
We offer a variety of content that addresses current developments on AI. It covers important topics such as data protection, copyright, employment law, IT security and competition law, and provides practical guidelines for companies and developers. Please find an overview here. The next episodes of our international AI webinar "Tech Me Up!" will take place on July 9 and September 19. The next episode will deal with AI Act governance; after the summer break, we will welcome a data protection authority as a guest. You can register here for our webinar.