Artificial Intelligence (AI) has become an integral part of business operations across various industries. However, its deployment demands careful legal and strategic consideration. Before integrating AI systems into a company's workflows, thorough diligence should be conducted. A structured approach involves asking key questions to understand the system's purpose, scope and compliance risk, followed by considering how to address remaining risks contractually.
Part A: diligence
There are crucial considerations every business should address before using AI systems. The first is whether the AI system will be used in a regulated industry, eg as a medical device or within critical infrastructure, as this will have an impact on the risk categorisation under the AI Act. Secondly, ask yourself whether the AI system is intended for internal use within the deploying company, for external use by third parties, or both. Besides the risk classification of the AI system, this second question has a crucial impact on the role of the company, either as provider or as deployer, and thereby on the applicable legal requirements under the AI Act.
Internal use
If the AI system is deployed internally within an organisation, the following questions will help you structure the diligence process:
- Purpose: what is the primary purpose of the AI system, eg process automation, data analysis or decision support? The specific purpose is relevant not just for the risk categorisation but also for identifying further regulatory regimes that might apply, eg the GDPR if personal data is processed.
- Access: who has access to the AI system, eg is it limited to specific employees or departments, or available across an entire corporate group? Understanding who will have access to the AI system helps to further define the purpose and to prepare the groundwork for GDPR questions if personal data is processed.
- Corporate group: is the AI system used across multiple entities within a corporate group? This question can become crucial if subsidiaries put their trademark on a high-risk AI system, make a substantial modification to it or change the purpose of a general-purpose AI system. In these circumstances the deployer will become the provider of the AI system, with all the obligations that come with this role under the AI Act. So, if you are adopting the AI system within a corporate group, define do's and don'ts for its use by the various entities.
- Automated decision-making: does the AI system involve automated decision-making? If so, can the decision be overridden by human intervention?
Keep in mind that high-risk AI systems require human oversight. In addition, if personal data is processed, further requirements will apply from a GDPR perspective, as in these scenarios the data subject's consent will in most cases be required.
External use
For AI systems deployed (also) externally, additional legal and operational questions will help you navigate the diligence process:
- Standalone/integrated: is the AI system standalone, eg a chatbot or analytics tool, or integrated into another product or service, eg AI-enhanced SaaS? If the AI system is integrated into other products or systems, be aware that this may trigger a different risk categorisation under the AI Act.
- AI as a service: is the AI system or the underlying model offered as a service? If the AI system or the AI model is part of the company's service package, this will have an impact on the privacy setup and on the contractual terms regarding ownership and liability.
Key considerations for diligence
Deploying AI requires careful legal diligence. Companies must ask essential questions regarding the system's scope, purpose, regulatory implications and access controls. A structured diligence process should enable a company to define the risk category of the AI system and to determine its role as deployer or as a new provider.
Part B: contract clauses
When negotiating AI-related contracts, specific clauses should address vendor obligations, compliance and risk management:
- Obligations for the vendor (provider): are there clear operational guidelines from the vendor regarding the AI system's deployment? Does the contract specify or define the provider's obligations for high-risk AI systems? Is the contract transparent regarding the datasets and algorithms used for training, their validation and development? Are clauses included to ensure correct and unbiased data output?
- Compliance with GDPR and cybersecurity regulations: does the contract include clauses addressing data protection laws and/or national cybersecurity regulations, eg a prohibition on training algorithms on the deployer's personal data? Is a data processing agreement necessary and, if so, has it been concluded? Are mechanisms in place to deal with regulatory changes and updates?
- Certifications and standards: does the AI system's vendor provide certifications demonstrating regulatory compliance, eg a C5 certificate for cloud providers processing health data in Germany, or adherence to common standards, eg ISO/IEC standards? Are there contractual obligations to maintain and update these certifications and standards?
- Compliance support: does the AI system’s vendor offer legal and technical support to ensure compliance? If so, are possible costs associated with these support services addressed? Are there agreed-upon procedures for audits and inspections?
Key considerations for contract clauses
Before drafting and negotiating contracts for the use of AI systems, the risk classification under the AI Act and the roles of the contracting parties must be clear. Contractually, the provider's obligations associated with high-risk AI systems should be addressed and further defined. The training, validation and development of the datasets used for the underlying algorithms become another integral part of AI-related contracts, with regard to requirements under both the AI Act and the GDPR.