Regulator | Link | Description |
---|---|---|
European Data Protection Board (EDPB), 27 June 2024 | AI Risks: Optical Character Recognition and Named Entity Recognition | This initiative by the European Data Protection Board (EDPB) under the "Support Pool of Experts (SPE) Programme" identifies and assesses data protection risks associated with the use of artificial intelligence (AI) technologies, specifically Optical Character Recognition (OCR) and Named Entity Recognition (NER) systems. By offering comprehensive tools and guidance, it aims to help data controllers evaluate the privacy implications of these AI applications. The project provides three downloadable PDF documents that detail specific privacy risks related to OCR and NER technologies and offer actionable insights for mitigating those risks. |
European Data Protection Board (EDPB), 27 June 2024 | AI Auditing | The AI Auditing project, part of the EDPB's "Support Pool of Experts (SPE) Programme", provides a robust methodology and checklist for auditing AI algorithms. It is designed to support data protection authorities in thoroughly evaluating AI systems and ensuring they comply with data protection regulations. Two downloadable PDF documents equip regulators and data controllers with the tools needed to conduct systematic and effective audits of AI technologies, fostering greater accountability and transparency in their deployment. |
European Data Protection Supervisor (EDPS), 3 June 2024 | First EDPS Orientations for EUIs using Generative AI | This document from the European Data Protection Supervisor (EDPS) provides initial guidelines for EU institutions using generative AI technologies. It outlines the key data protection principles and compliance requirements these institutions must adhere to when deploying generative AI, emphasizing transparency, accountability, and the protection of personal data, so that the use of generative AI aligns with EU data protection law and respects individuals' privacy rights. |
European Data Protection Board (EDPB), 24 May 2024 | Taskforce ChatGPT Report | The report investigates data protection issues related to ChatGPT, providing insights into compliance challenges and suggesting regulatory measures. The findings aim to guide future AI governance. |
Regulator | Link | Description |
---|---|---|
Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), 15 July 2024 | Data Protection and Large Language Models | The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) published a discussion paper on the relationship between the GDPR and Large Language Models (LLMs). The paper explores whether LLMs store personal data and aims to support companies and authorities with data protection issues, distinguishing between LLMs as an AI model and as part of an AI system. |
Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), 12 June 2024 | AI Training with Personal Data on Instagram and Facebook | This news release from the Hamburg Commissioner for Data Protection and Freedom of Information addresses the use of personal data for AI training by Instagram and Facebook. It raises concerns about the privacy implications of using personal data without adequate consent and transparency. The document highlights the need for social media platforms to comply with data protection regulations and to implement robust measures to protect user privacy when developing AI models. |
Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), 11 June 2024 | Information on Applicant Data Protection and Recruiting | This information sheet from the Hamburg Commissioner for Data Protection and Freedom of Information provides guidelines for protecting personal data during the recruitment process. It also addresses AI applications in recruiting and emphasizes the importance of transparency, data minimization, and the security of applicant data. The document offers practical advice for organizations to comply with data protection laws while handling job applicants' personal data, ensuring their privacy is respected throughout the recruitment process. |
The State Commissioner for Data Protection Lower Saxony (LfD Niedersachsen), 6 June 2024 | Remarks on ChatGPT in Activity Report | The activity report includes, on page 52, an analysis of ChatGPT's compliance with data protection laws, highlighting concerns about data processing practices and offering recommendations. |
Data Protection Conference (DSK), 6 May 2024 | Guidance (Orientierungshilfe) on AI and Data Protection, Version 1.0 | This document provides guidelines on how AI systems should adhere to data protection regulations, including principles for lawful data processing and safeguarding individual privacy. The guidelines aim to ensure AI developments respect privacy rights. |
The Bavarian State Commissioner for Data Protection (LfD Bayern), 16 April 2024 | Technical and Organizational Measures for Artificial Intelligence Systems | This document from the Bavarian Data Protection Authority outlines the necessary technical and organizational measures for ensuring the compliance of artificial intelligence (AI) systems with data protection regulations. It provides detailed guidance on implementing security controls, data minimization, transparency, and accountability. The document aims to assist organizations in developing and deploying AI systems that adhere to legal requirements and protect individual privacy effectively. |
Bavarian State Office for Data Protection Supervision (BayLDA), 24 January 2024 | Checklist for Artificial Intelligence (AI) Systems | This document from the Bavarian Data Protection Authority provides a comprehensive checklist for assessing the compliance of AI systems with data protection regulations. It covers key areas such as data processing practices, user consent, data security, transparency, and accountability. The checklist aims to guide organizations in ensuring that their AI systems adhere to legal requirements, uphold data protection standards, and protect user privacy effectively. |
Data Protection Authority of Baden-Württemberg (LfDI Baden-Württemberg), 23 November 2023 | Legal Foundations for Data Protection and Artificial Intelligence | This document from the Data Protection Authority of Baden-Württemberg outlines the legal foundations and regulatory framework for data protection in the context of artificial intelligence (AI). It provides detailed guidance on the application of data protection laws to AI technologies, emphasizing the importance of transparency, accountability, and the protection of individual rights. The document serves as a resource for organizations to ensure their AI systems comply with legal requirements and uphold data privacy standards. |
Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), 13 November 2023 | Checklist for Large Language Model (LLM) Chatbots | The document from the Hamburg Commissioner for Data Protection and Freedom of Information provides a checklist for assessing the data protection compliance of large language model (LLM) chatbots. It covers crucial aspects such as data processing practices, user consent, data security, and transparency. The checklist aims to guide developers and operators in ensuring that their LLM chatbots adhere to legal requirements and effectively protect user privacy. |
Hessian Commissioner for Data Protection and Freedom of Information (HBDI), 19 April 2023 | Questionnaire of the Hessian Commissioner for Data Protection and Freedom of Information on ChatGPT | The document from the Hessian Data Protection Authority provides a comprehensive questionnaire designed to assess the compliance of ChatGPT with data protection regulations. It addresses key aspects such as data processing practices, user consent, data security measures, and transparency in AI operations. The aim is to ensure that ChatGPT and similar AI systems adhere to legal requirements and effectively protect user privacy. |
Data Protection Conference (DSK), 6 November 2019 | DSK position paper on recommended technical and organisational measures for the development and operation of AI systems | The position paper outlines the key considerations and guidelines for the ethical and legal use of artificial intelligence (AI) in various sectors. It emphasizes the necessity of transparency, accountability, and the protection of fundamental rights in the deployment of AI systems. The document also highlights the importance of ensuring that AI technologies comply with data protection regulations and advocates for the implementation of measures to mitigate potential risks associated with AI applications. |
Data Protection Conference (DSK), 3 April 2019 | Hambach Declaration | The document, known as the "Hambach Declaration," provides a comprehensive overview of the principles and recommendations for the ethical and lawful use of artificial intelligence (AI) and automated decision-making systems. It emphasizes the importance of transparency, accountability, and fairness in AI systems, advocating for robust data protection and privacy measures. The declaration calls on policymakers, developers, and users to adhere to these principles to ensure that AI technologies respect human rights and democratic values. |
Regulator | Link | Description |
---|---|---|
National Commission for Information Technology and Civil Liberties (CNIL), 10 June 2024 | Artificial intelligence: new public consultation on the development of AI systems | This document from the French Data Protection Authority (CNIL) announces the launch of a new public consultation on artificial intelligence (AI). The consultation seeks to gather feedback on several key aspects: the legal basis of legitimate interest for developing AI systems, focusing on the dissemination of open-source models and web scraping practices; informing and respecting the rights of individuals affected by AI systems; annotating data; and ensuring the security of AI system development. The aim is to develop comprehensive guidelines that ensure AI technologies are transparent, secure, and compliant with data protection laws, while facilitating the responsible use of AI. |
National Commission for Information Technology and Civil Liberties (CNIL), 8 April 2024 | AI: CNIL publishes its first recommendations on the development of artificial intelligence systems | This document provides recommendations for the development and use of AI systems, emphasizing transparency, accountability, and data minimization. The guidelines are intended to help organizations comply with data protection laws. |
National Commission for Information Technology and Civil Liberties (CNIL), 19 July 2022 | Position on "Augmented" Cameras | CNIL outlines guidelines for the use of augmented cameras in public spaces, addressing privacy concerns and permissible uses. It seeks to balance security needs with individual privacy rights. |
Regulator | Link | Description |
---|---|---|
Italian Data Protection Authority (Garante per la protezione dei dati personali), 20 May 2024 | Guidelines on the Use of Artificial Intelligence for the Protection of Personal Data | This document from the Italian Data Protection Authority (Garante per la protezione dei dati personali) provides guidelines on the use of artificial intelligence (AI) with a focus on protecting personal data. It outlines the regulatory requirements and best practices for ensuring AI systems comply with data protection laws. The guidelines emphasize transparency, accountability, and the protection of individuals' rights, aiming to promote the ethical and lawful deployment of AI technologies. |
Italian Data Protection Authority (Garante per la protezione dei dati personali), 29 January 2024 | ChatGPT: Italian DPA notifies breaches of privacy law to OpenAI | The notification requires OpenAI to make specific data protection improvements within 30 days, focusing on enhancing transparency and user consent mechanisms. The goal is to align OpenAI's practices with Italian data protection law. |
Italian Data Protection Authority (Garante per la protezione dei dati personali), 31 March 2023 | Artificial intelligence: stop to ChatGPT by the Italian SA | The Italian Data Protection Authority imposed a temporary ban on ChatGPT for non-compliance with privacy regulations, based on data processing practices deemed unlawful. The document outlines the reasons for the measure and the actions required for compliance. |
Regulator | Link | Description |
---|---|---|
Austrian Data Protection Authority (DSB), 27 May 2024 | Announcements on AI in the Private Sector | This document from the Austrian Data Protection Authority provides official announcements and guidelines on the use of artificial intelligence (AI) in the private sector. It addresses the legal and ethical considerations for deploying AI technologies, emphasizing compliance with data protection laws. The guidelines offer practical advice on ensuring transparency, accountability, and the protection of personal data, aiming to help organizations navigate the complexities of AI implementation while safeguarding individual privacy rights. |
Austrian Data Protection Authority (DSB), 27 May 2024 | Announcements on AI in the Public Sector | This document from the Austrian Data Protection Authority provides official announcements and guidelines on the use of artificial intelligence (AI) in the public sector. It addresses the regulatory and ethical considerations for deploying AI technologies in government and public services, emphasizing compliance with data protection laws. The guidelines offer practical advice on ensuring transparency, accountability, and the protection of personal data, aiming to assist public sector organizations in implementing AI responsibly and effectively while safeguarding citizens' privacy rights. |
Austrian Data Protection Authority (DSB), 25 April 2024 | FAQ on AI and Data Protection | This document from the Austrian Data Protection Authority provides a comprehensive FAQ addressing common questions related to artificial intelligence (AI) and data protection. It covers various aspects of AI deployment, including compliance with data protection laws, risk mitigation, transparency, and user rights. The FAQ aims to guide organizations in understanding and implementing best practices for data protection in AI systems, ensuring adherence to legal requirements and safeguarding individual privacy. |
Regulator | Link | Description |
---|---|---|
Belgian Data Protection Authority (Gegevensbeschermingsautoriteit), 15 March 2024 | Decision on the Merits No. 46/2024 | This document from the Belgian Data Protection Authority (Gegevensbeschermingsautoriteit) presents a decision on the merits regarding a specific data protection case. It outlines the findings, legal considerations, and conclusions related to the case, providing detailed reasoning and guidance on compliance with data protection regulations. The decision serves as a precedent and offers insights into the Authority's interpretation and enforcement of data protection laws. |
Regulator | Link | Description |
---|---|---|
Danish Data Protection Authority (Datatilsynet), 22 May 2024 | New Templates for Conducting Data Protection Impact Assessments | This announcement from the Danish Data Protection Authority introduces new templates for conducting Data Protection Impact Assessments (DPIAs). The templates are designed to help organizations systematically identify and mitigate privacy risks associated with their data processing activities. By providing structured guidance and best practices, these templates aim to ensure that organizations comply with data protection regulations and effectively protect personal data throughout the lifecycle of their AI and other data-intensive projects. |
Danish Data Protection Authority (Datatilsynet), 19 January 2024 | Publication of Dataset and AI Model | This decision by the Danish Data Protection Authority addresses the legal considerations and requirements for the publication of datasets and AI models. It highlights the importance of ensuring data privacy and protection when releasing datasets and AI models to the public. The document outlines specific guidelines and compliance measures that organizations must follow to safeguard personal data, emphasizing transparency, consent, and accountability in the use and dissemination of AI technologies. |
Regulator | Link | Description |
---|---|---|
Dutch Data Protection Authority (Autoriteit Persoonsgegevens), 11 June 2024 | AP and RDI: Supervision of AI Systems Requires Collaboration and Must Be Arranged Quickly | This announcement from the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) and the Dutch Authority for Digital Infrastructure (RDI) emphasizes the urgent need for collaboration in supervising AI systems. It highlights the importance of establishing effective oversight mechanisms to ensure that AI systems comply with data protection laws and operate transparently and ethically. The document calls for swift action to coordinate efforts among regulatory bodies to address the complexities and risks associated with AI technologies. |
Dutch Data Protection Authority (Autoriteit Persoonsgegevens), 1 May 2024 | Guidance on Scraping by Individuals and Private Organizations | This document from the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) provides guidance on the practice of data scraping by individuals and private organizations. It outlines the legal considerations and data protection requirements associated with scraping publicly accessible data. The guidance emphasizes the importance of complying with data protection laws, ensuring transparency, and protecting individuals' privacy rights when engaging in data scraping activities. |
Regulator | Link | Description |
---|---|---|
Spanish Data Protection Agency (AEPD), 20 September 2022 | Guidelines on Machine Learning | The guidelines explain how to use data sets in machine learning in compliance with the GDPR, covering data minimization, consent, and transparency. The aim is to help organizations develop AI that respects data protection laws. |
Regulator | Link | Description |
---|---|---|
Swedish Data Protection Authority (IMY), 18 June 2024 | First Interim Report from the AI Regulatory Sandbox Pilot Project Published | This news release from the Swedish Data Protection Authority (IMY) announces the publication of the first interim report from its AI regulatory sandbox pilot project. The report provides insights into the challenges and opportunities of regulating AI technologies, based on practical experiences from the sandbox participants. It aims to inform future regulatory approaches and promote best practices for ensuring compliance with data protection laws in AI development. |
Swedish Data Protection Authority (IMY), 27 February 2024 | Guidelines on AI and GDPR | The guidelines set out how to comply with the GDPR in AI development, including requirements for data minimization, transparency, and user rights. The aim is to ensure AI applications respect privacy regulations. |
Regulator | Link | Description |
---|---|---|
Information Commissioner's Office (ICO), 19 June 2024 | Snap 'My AI': Non-Confidential Decision | This document from the UK's Information Commissioner's Office (ICO) presents a non-confidential version of the decision regarding Snap's 'My AI' chatbot. It details the ICO's findings on data protection compliance, highlighting issues related to user consent, data security, and transparency. The decision includes recommendations for improving the chatbot's practices to ensure they align with data protection regulations and protect user privacy. |
Information Commissioner's Office (ICO), 10 June 2024 | ICO consultation series on generative AI and data protection | This document series from the UK's Information Commissioner's Office (ICO) involves consultations on generative AI and data protection. It seeks input from stakeholders to explore the implications, challenges, and benefits of generative AI technologies. The consultations aim to develop robust regulatory guidelines and ensure that the deployment and use of generative AI comply with data protection laws, safeguarding privacy and promoting transparency and accountability in AI systems. |
Information Commissioner's Office (ICO), 21 May 2024 | ICO warns organisations must not ignore data protection risks as it concludes Snap 'My AI' chatbot investigation | This news release from the UK's Information Commissioner's Office (ICO) concludes the investigation into Snap's 'My AI' chatbot, highlighting significant data protection risks. The ICO emphasizes the necessity for organizations to prioritize data protection compliance in AI technologies. The document provides insights into the investigation's findings and reinforces the importance of transparency, user consent, and robust data security measures in AI deployments. |
Information Commissioner's Office (ICO), 10 May 2024 | Inquiry on Generative AI Accuracy | The inquiry investigates the accuracy principle as applied to generative AI models, focusing on the relationship between training data accuracy and AI outputs. The aim is to ensure AI systems produce reliable results. |
Information Commissioner's Office (ICO), 15 March 2023 | Guidance on AI and Data Protection | The guidance document from the UK's Information Commissioner's Office (ICO) provides comprehensive advice on the application of data protection laws to artificial intelligence (AI). It emphasizes the importance of transparency, accountability, and data minimization in AI systems. The document offers practical recommendations for organizations on how to ensure their AI technologies comply with the UK GDPR, including guidance on data protection impact assessments, data subject rights, and managing the risks associated with AI. |
Information Commissioner's Office (ICO), 18 June 2021 | Opinion on Live Facial Recognition | The ICO's opinion addresses the use of live facial recognition technology in public spaces, emphasizing the need for necessity, proportionality, and fairness. The document aims to guide organizations in its lawful use. |
Regulator | Link | Description |
---|---|---|
Office of the Privacy Commissioner (OPC), 7 December 2023 | Principles for Responsible AI | The document outlines principles for responsible and privacy-protective AI development, covering legal authority, data minimization, and user transparency. The guidelines aim to help organizations develop AI that respects privacy rights. |
Office of the Privacy Commissioner (OPC), 5 April 2023 | Canadian privacy commissioner to probe ChatGPT | The Privacy Commissioner of Canada announced an investigation into ChatGPT's compliance with data protection laws, aiming to ensure that the AI application respects user privacy. The findings will guide future regulatory actions. |
Regulator | Link | Description |
---|---|---|
G7 Data Protection Authorities, 21 June 2023 | Statement on Generative AI | The statement addresses data protection concerns with generative AI technologies, highlighting risks related to privacy and data security. The document calls for collaborative efforts to ensure responsible AI development. |
Regulator | Link | Description |
---|---|---|
Berlin Group, 5 June 2024 | Working Paper on Facial Recognition Technology | This paper outlines the attributes and uses of facial recognition technology (FRT) in both private and public sectors, highlighting the associated privacy and data protection risks, along with mitigation strategies. It provides recommendations for policymakers, controllers, and processors using FRT for public or economic purposes. While focusing on the technical aspects of FRT, the paper also addresses the interaction between technical components and non-technical elements such as policies, regulations, human designers, end users, and subjects, and discusses the distinct risks of facial analytics. |
Regulator | Link | Description |
---|---|---|
Personal Information Protection Commission (PIPC), 7 June 2024 | Standards for Automated Decisions | The standards address how personal information processors should handle automated decisions, including requirements for transparency and user rights. The guidelines aim to protect individuals in AI-driven decision processes. |
Personal Information Protection Commission (PIPC), 30 May 2024 | Synthetic Data Generation Model | This document introduces models for generating synthetic data for AI development, aiming to enable data usage without infringing on privacy. The guidelines ensure that synthetic data retains the characteristics of real data while being safe to use. |
Personal Information Protection Commission (PIPC), 28 March 2024 | Recommendations on AI Compliance | Following an investigation, PIPC issued recommendations for AI service providers, focusing on compliance with personal data protection laws. The recommendations aim to rectify identified non-compliance issues. |
Personal Information Protection Commission (PIPC), 27 March 2024 | Prior Appropriateness Review System | This system assists businesses in complying with data protection laws during AI development, involving a detailed review of data processing environments. The system includes confidentiality policies to protect the review results. |
Personal Information Protection Commission (PIPC), 3 August 2023 | Guidance on AI and Personal Information | The guidance provides rules for the safe use of personal information in AI, including principles for data minimization and user consent. The document aims to ensure AI applications comply with data protection laws. |
Personal Information Protection Commission (PIPC), 31 May 2021 | AI Personal Information Protection Checklist | The checklist enhances awareness among AI developers about data protection principles, including guidelines on lawful processing, transparency, and user rights. The document aims to ensure ethical AI development. |
Regulator | Link | Description |
---|---|---|
Saudi Data and AI Authority (SDAIA), 14 September 2023 | AI Ethics Principles | These principles establish standards for ethical AI development and use, covering fairness, privacy, and transparency. The guidelines aim to ensure AI technologies benefit society while respecting rights. |
Regulator | Link | Description |
---|---|---|
Personal Data Protection Commission (PDPC), 1 March 2024 | Guidelines on Personal Data in AI | The guidelines cover the use of personal data in AI systems, providing exceptions for business improvement and research while ensuring user consent. The aim is to help organizations comply with the PDPA. |
Regulator | Link | Description |
---|---|---|
Federal Data Protection and Information Commissioner (FDPIC), 4 April 2023 | Guidelines on ChatGPT and AI | These guidelines outline data protection requirements for ChatGPT and similar AI applications, emphasizing transparency and user consent. The document aims to ensure compliance with Swiss data protection laws. |
Regulator | Link | Description |
---|---|---|
Turkish Personal Data Protection Authority (KVKK), 15 September 2021 | Recommendations on AI Data Protection | The recommendations provide guidelines for data protection in AI development, emphasizing impact assessments, anonymization, and user consent. The aim is to ensure ethical and lawful AI practices. |
Regulator | Link | Description |
---|---|---|
Federal Office for Information Security (BSI), 14 April 2023 | AI Security Concerns in a Nutshell | This guideline outlines the security risks associated with AI technologies, detailing various types of attacks on machine learning systems and suggesting mitigation strategies. The goal is to enhance the security of AI applications in critical sectors. |
Regulator | Link | Description |
---|---|---|
National Institute of Standards and Technology (NIST), 29 April 2024 | NIST Special Publication 1270: Toward a Standard for Identifying and Managing Bias in Artificial Intelligence | This document addresses the challenge of AI bias, providing initial guidance for identifying and managing bias. It aims to develop detailed socio-technical guidance for mitigating bias in AI systems. |
National Institute of Standards and Technology (NIST), 29 April 2024 | NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models | This companion resource to SP 800-218 addresses concerns with generative AI systems, focusing on securing software from malicious training data. It expands the Secure Software Development Framework (SSDF) to ensure AI system performance is not adversely affected. |
National Institute of Standards and Technology (NIST), 29 April 2024 | NIST AI 600-1: AI RMF Generative AI Profile | This document helps organizations identify unique risks posed by generative AI, proposing actions for risk management. Developed with input from over 2,500 members of the NIST generative AI public working group, it centers on 13 risks and 400+ actions for developers. |
National Institute of Standards and Technology (NIST), 29 April 2024 | NIST AI 100-5: A Plan for Global Engagement on AI Standards | This plan drives the global development and implementation of AI-related standards, promoting cooperation, coordination, and information sharing. It involves multidisciplinary stakeholders and aligns with the National Standards Strategy for Critical and Emerging Technology. |
National Institute of Standards and Technology (NIST), 29 April 2024 | NIST AI 100-4: Reducing Risks Posed by Synthetic Content | This publication informs on methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording. It outlines current methods and areas for further research to verify content authenticity. |
President of the United States, 30 October 2023 | Executive Order 14110 of October 30, 2023 | EO 14110 charges multiple agencies, including NIST, with developing guidelines for generative AI and secure software development, launching initiatives for AI capability evaluation, and creating global engagement plans. The aim is to ensure safe, secure, and trustworthy AI development and use. |
Regulator | Link | Description |
---|---|---|
European Commission, 17 May 2024 | Commission compels Microsoft to provide information under the Digital Services Act on generative AI risks on Bing | This news article from the European Commission details the action taken to compel Microsoft to provide information regarding the risks associated with generative AI under the Digital Services Act. It highlights the regulatory efforts to ensure transparency and accountability in the deployment of generative AI technologies, addressing potential risks to privacy, security, and compliance with EU regulations. The Commission's action aims to safeguard user rights and promote responsible AI development. |
Body of European Regulators for Electronic Communications (BEREC), 15 March 2024 | Position on competition in generative AI and virtual worlds | BEREC outlines its position on competition dynamics in AI and virtual worlds, addressing issues related to market openness, sustainability, and cybersecurity. The goal is to ensure a competitive and fair digital market. |
Regulator | Link | Description |
---|---|---|
Federal Cartel Office (Bundeskartellamt), 15 November 2023 | Cooperation between Microsoft and OpenAI currently not subject to merger control | The Bundeskartellamt found that Microsoft's involvement in and cooperation with OpenAI is currently not subject to merger control in Germany. |
Regulator | Link | Description |
---|---|---|
Competition and Markets Authority (CMA), 16 April 2024 | Technical Update on AI Models | The report provides a technical update on AI models, highlighting market developments and competition risks. It addresses concerns about the dominance of major tech firms. The findings will guide future regulatory actions. |
Competition and Markets Authority (CMA), 18 September 2023 | Principles for Competitive AI Markets | The document proposes principles to guide competitive AI markets, emphasizing accountability, transparency, and fair dealing. The goal is to protect consumers and promote innovation in the AI sector. |
Regulator | Link | Description |
---|---|---|
Digital Platform Regulators Forum (DP-REG), 23 November 2023 | Paper on LLMs and Competition | The working paper explores the intersection of large language models (LLMs) with digital platform services, highlighting competition considerations and potential market concentration. The aim is to inform regulatory approaches to LLMs. |
Regulator | Link | Description |
---|---|---|
Competition Commission of India (CCI), 3 June 2024 | Market Study on AI and Competition | The study explores the impact of AI on market competition, efficiency, and innovation, aiming to identify potential competition issues and inform regulatory strategies. The findings will guide AI policy development in India. |
By several authors

By Dr. Jakob Horn, LL.M. (Harvard) and Alexander Schmalenberger, LL.B.