4 November 2024
AIQ - Autumn – 6 of 7 Insights
In September 2024, the Belgian Data Protection Authority (BDPA) published an information brochure on AI systems and the GDPR (the Guidance), outlining the interplay between the GDPR and the AI Act in the context of AI system development.
The Guidance first outlines the criteria to be met to qualify as an AI system under the AI Act: a machine-based system that operates with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions which can influence physical or virtual environments.
In some cases, AI systems can also learn from data and adapt over time. Examples of AI systems in daily life include spam filters in emails, recommender systems on streaming services, virtual assistants, and AI-powered medical imaging tools.
The Guidance goes on to tackle the application of the GDPR and the AI Act requirements to AI systems, emphasising how these two pieces of legislation complement and reinforce each other:
The six legal bases under the GDPR remain unchanged by the AI Act. In addition, the AI Act prohibits certain AI practices deemed to pose an unacceptable risk, such as social scoring and real-time facial recognition in public spaces. The GDPR fairness principle is also reinforced by the requirement to mitigate bias and discrimination in the development, deployment, and use of AI systems.
The AI Act complements the GDPR by mandating user awareness when interacting with AI systems, and where high-risk AI systems are concerned, by requiring clear explanations of how data influences the AI decision-making process.
Under the GDPR, data must be collected for specific purposes and limited to what is necessary. The AI Act reinforces these principles, especially for high-risk AI systems, for which the intended purpose must be clearly defined and documented.
The GDPR requires data accuracy, which the AI Act strengthens for high-risk AI systems by requiring the use of high-quality and unbiased data to prevent discriminatory outcomes.
The GDPR limits data storage to what is necessary for the processing (subject to certain exceptions). The AI Act does not add any extra requirements in that respect.
The GDPR allows individuals to challenge solely automated decisions which have a legal or similarly significant effect on them, while the AI Act emphasises proactive meaningful human oversight for high-risk AI systems.
Both the GDPR and the AI Act mandate security measures for data processing. The AI Act highlights risks specific to AI systems, such as bias and manipulation, and requires additional security measures such as identifying and planning for potential problems, continuous monitoring and testing, and human oversight throughout the development, deployment, and use of high-risk AI systems.
The GDPR grants individuals rights over their personal data, such as access, rectification, and erasure. The AI Act enhances these rights by requiring clear explanations of how data is used in AI systems.
Both the GDPR and the AI Act stress the importance of organisations demonstrating accountability. For AI systems, this includes risk management, clear documentation on the design and implementation of AI systems, human oversight for high-risk AI systems and incident reporting mechanisms.
Finally, the Guidance shows how to apply all these requirements to a specific use case, namely a car insurance premium calculation system.
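To make the interplay more concrete, below is a minimal, purely illustrative Python sketch of such a premium calculation system. The factors, weights and review threshold are invented for this example and are not taken from the Guidance; the sketch simply shows how a per-factor breakdown can support the transparency and explanation duties described above, and how a human-oversight step can be built into an automated decision.

```python
# Hypothetical illustration only: a simplified car insurance premium calculator.
# All factors, weights and thresholds are invented for this sketch and are not
# drawn from the BDPA Guidance or the AI Act.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    years_licensed: int
    claims_last_5_years: int

BASE_PREMIUM = 400.0  # assumed base amount in EUR

# Assumed, purely illustrative surcharges per risk factor.
WEIGHTS = {
    "age_under_25": 150.0,
    "years_licensed_under_3": 100.0,
    "per_recent_claim": 75.0,
}

def calculate_premium(applicant: Applicant) -> dict:
    """Return the premium together with a per-factor breakdown, so the
    applicant can be given a clear explanation of how their data
    influenced the outcome (transparency / explainability)."""
    contributions = {}
    if applicant.age < 25:
        contributions["age_under_25"] = WEIGHTS["age_under_25"]
    if applicant.years_licensed < 3:
        contributions["years_licensed_under_3"] = WEIGHTS["years_licensed_under_3"]
    if applicant.claims_last_5_years > 0:
        contributions["recent_claims"] = (
            applicant.claims_last_5_years * WEIGHTS["per_recent_claim"]
        )

    premium = BASE_PREMIUM + sum(contributions.values())

    # Illustrative human-oversight hook: unusually high outcomes are routed
    # to a human reviewer instead of being applied automatically.
    needs_human_review = premium > 2 * BASE_PREMIUM

    return {
        "premium_eur": premium,
        "explanation": contributions,  # per-factor breakdown for the applicant
        "needs_human_review": needs_human_review,
    }

if __name__ == "__main__":
    result = calculate_premium(Applicant(age=22, years_licensed=2, claims_last_5_years=1))
    print(result)
```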