
10 July 2023

Metaverse July 23 – 2 of 3 Publications

Human threats and artificial intelligence

Jo Joyce looks at the UK ICO's approach to security, safety and robustness.

Author

Jo Joyce

Senior counsel


Talking about AI as a cyber security risk is a bit like talking about the internet as a cyber security risk. It cannot be denied that the very existence of vast quantities of data processed by machine learning applications creates vulnerabilities for the owners and users of those models, as well as opportunities for threat actors wishing to exploit them. At the same time AI can be a tool of attack, deployed to write code or support social engineering attempts on a scale that human threat actors can never hope to match. Since the AI genie is unlikely to go back into its bottle, regulators are having to adapt their methods and guidance to fit the use of – and threats from – a rapidly expanding range of technologies.

Risk by default if not design

In its March 2023 AI White Paper, 'A pro-innovation approach to AI regulation', the UK government set out its approach to regulating AI. It envisages a sector-focused, risk-based approach with regulator guidance and sandboxes, underpinned by five core principles. Top of the list, unsurprisingly given the (arguably) alarmist mood currently surrounding developments in AI, is security, safety and robustness.

The UK's Information Commissioner's Office (the ICO) published guidance for users and creators of AI some years ago but has recently updated it following pressure from industry. The ICO cites some of the unique characteristics of AI as causing privacy compliance challenges, particularly in the context of data security. AI systems introduce complexity beyond that found in traditional IT systems, and complexity generally means more points of vulnerability and weak spots at risk of access by threat actors. The risk is growing too. Whereas in the past the users of AI have often been its creators, the recent boom in AI activity has significantly increased the number of businesses providing AI solutions to other organisations.

When buying in AI solutions, plugins and integrations, businesses are likely to be heavily reliant on third party code and on the security safeguards built in by suppliers. AI systems do not operate in isolation but as part of a larger chain or web of components and processes. This makes some security risks harder to identify and manage, such as monitoring points of attack or spotting unexpected system activity, and may increase others, such as the risk of outages leading to a loss of access to personal data.

The level of risk to security posed by AI will depend on:

  • the way the technology is built and deployed (including the data it is trained on)
  • the complexity of the organisation deploying it (and its processor/s)
  • the strength and maturity of the organisation's existing cyber security capabilities, and
  • the nature, scope, context and purposes of the processing of personal data by the AI system, and the risks posed to individuals as a result.

Security implications and the level of investment needed to ensure adequate security should be core aspects of any privacy impact assessment or AI risk profiling exercise.

Not all liabilities can be outsourced

The threat to data controllers caused by processor failings or attacks on large processors is a widely known phenomenon. Although data controllers are responsible for the security and management of the personal data under their control, in practice large IT service providers acting as data processors will be best placed to take on responsibility for meeting many of the accountability, transparency and security elements of the data controller's GDPR duties. This reliance grows greater with the rise of AI-fuelled service providers. Controllers, especially smaller ones, will not necessarily be able to describe accurately the nature of the processing undertaken by an AI service provider on their behalf. This means that the IT service provider will need to provide detailed guidance for impact assessment documentation, privacy notices and security protocols.

If a processor (or a controller) experiences a data security incident relating to data processed through an AI model or system, it is likely that the processor will have to take the lead in preparing reports to data regulators to explain the situation. It is crucial that organisations licensing in AI tools or AI development services ensure they have an adequate understanding of the technology and how it will process their data. Increasing numbers of organisations are embedding generative AI tools in their own environments and, in doing so, need to make serious decisions about how much data to expose to the model they are using and how to protect that data from both external and internal security threats. A loss of access to personal data can be as serious as a loss of control: if the compromise of an AI tool forces the immediate shutdown of systems which automate the processing of personal data for essential business services (HR or payroll, for example), the resulting loss of access could itself lead to regulatory action.
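By way of illustration only, a minimal sketch of that kind of precaution might look something like the following, in which obvious identifiers are stripped from text before a prompt is passed to an externally hosted model. The patterns and function names here are hypothetical rather than taken from any particular vendor's toolkit, and real deployments would need far more sophisticated controls.

import re

# Illustrative sketch only: strip obvious personal identifiers from text before
# it is sent to an externally hosted generative AI service. The patterns and
# the prepare_prompt() name are hypothetical, not any particular vendor's API.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def prepare_prompt(raw: str) -> str:
    # Only the redacted text would leave the organisation's environment.
    return redact(raw)

if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.com or +44 20 7946 0000."
    print(prepare_prompt(sample))
    # Contact Jane Doe at [EMAIL] or [PHONE].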

The ICO may adopt a pragmatic attitude to data controllers that have to rely on the expertise, security and accountability efforts of their much larger processors, but it is unlikely to show leniency to organisations that cannot adequately explain the processing undertaken on their behalf or the security measures adopted to keep data safe.

When it comes to investing in IT security, the ICO's AI Toolkit guidance notes that the position is no different where AI systems and data sets are being protected. Organisations are expected to take a proportionate approach, taking into account the resources available to them, when setting their IT security budget. However, if businesses wish to increase their profits through the use of AI, they should expect to reinvest a portion of those profits in their security spending.

AI as a weapon

While the ICO is naturally concerned that the use of AI may create additional vulnerabilities for organisations, or exacerbate existing threats, it is also aware that legitimate businesses are not alone in seizing the opportunities offered by the AI boom: cyber criminals across the globe have begun to automate their less than legitimate activities.

A number of cyber security reports have identified the risk of generative AI being used to write malicious code that can precisely target specific organisations or systems. While this risk is real in the long term, it is not the most pressing threat to data security that AI poses. AI can be used to write code but, at present, its output generally isn't good enough to rival that of established threat actor groups, though such groups may well use it as a labour-saving device, generating a first draft of malware for further refinement by an expert.

While generative AI platforms like ChatGPT are not yet great at writing code, they are better at writing prose. The UK's National Cyber Security Centre (the NCSC) recently warned that large language models like ChatGPT are already being used by cyber criminals to draft convincing phishing emails: with the right prompts, they can mimic an individual's writing style and produce persuasive messages, including in multiple languages. Threat actors with high technical capabilities but less impressive linguistic skills can use generative AI to conduct advanced social engineering scams in the native language of their targets. Although many models, including ChatGPT, have safeguards intended to stop them from responding to requests that are clearly designed to support illegal activities, so far these have been overcome by adjusting the style of the prompts put to the model.

AI as a shield

For every news article about the many possibilities presented by AI there are several more identifying reasons to be fearful. But while AI may provide swift solutions to cyber criminals as much as to any other user, it can also be used to defend increasingly complex, and therefore vulnerable, systems from attack. "Artificial intelligence allows defenders to scan networks more automatically and fend off attacks rather than doing it manually", David van Weel, NATO's Assistant Secretary-General for Emerging Security Challenges, recently commented. A Google search for AI cyber tools now reveals thousands of hits which, in most cases, did not exist a year ago. When it comes to data security, the widespread adoption of AI certainly presents significant challenges, both as a tool for threat actors and as a point of vulnerability for organisations, but, as is so often the case in the digital world, AI brings with it both poison and cure.
