
18 April 2024

Cyber security – weathering the cyber storms – 1/6 | Insight

AI – the threats it poses to reputation, privacy and cyber security, and some practical solutions to combating those threats

Disclaimer: This article was written with the help of AI but also by Michael Yates, Andi Terziu and Alisha Persaud.

Authors

Michael Yates

Partner


Andi Terziu

Senior Associate


We asked our own internal AI tool, Litium, to list the threats it perceives AI poses to reputation, privacy and cyber security. The results were impressive, so we've included some of its suggestions in this article. It seemed Litium had done half the job. However, before we could despair at the thought of our impending replacement by an AI tool, we tested Litium's capabilities further by asking it for a list of solutions to the perceived threats. Although Litium did provide some sensible suggestions, we thought that, as humans with extensive experience in this area, we could come up with better practical tips and finish what Litium started.

Threats to reputation

AI can be harnessed to conduct a variety of attacks aimed at damaging the reputation of individuals and organisations. This usually takes the form of the creation/manipulation and quick mass dissemination of false information.

Deepfakes are a prime example. These are sophisticated AI-generated videos or audio recordings that convincingly portray people in situations they were never actually in, with the potential to cause them significant reputational harm.

AI can be used to orchestrate disinformation campaigns targeted at groups, and/or more focused attacks on individuals or companies, by producing and disseminating false and/or defamatory information on a large scale across various platforms, including social media. Such campaigns can quickly erode public trust and credibility and cause lasting harm to the target's reputation.

AI also enables the automation of social engineering attacks. Using data analysis, AI systems can craft highly personalised messages that manipulate perceptions and spread harmful narratives, effectively targeting specific individuals or groups. Additionally, fake reviews created by AI algorithms can sway consumer opinion and damage business reputation on a wide scale.

To compound any intended reputational harm, AI-powered SEO techniques can be used to manipulate search engines so that negative content appears more prominently in search results relating to the target person or company.

Cyber security threats such as phishing attacks are another area where AI's capabilities could be misused to cause reputational damage. By sending out fraudulent communications that convincingly mimic legitimate sources, these AI-driven phishing attempts pose serious risks to organisations' security if they lead to, for example, high-profile breaches or fraudulent banking transfers.

Threats to privacy

AI significantly increases the scale and sophistication of privacy intrusions by automating complex tasks and data analysis. AI-facilitated cyber attacks can result in the exposure of sensitive information and personal data, while de-anonymisation techniques may reveal individuals' identities within seemingly anonymous datasets. Mass surveillance, enhanced by AI's capability in facial recognition and behavioural tracking, raises profound privacy concerns due to its potential for abuse where privacy laws are either not in place or not followed.

Inference attacks utilising AI can deduce private attributes from public data, leading to privacy breaches of which the relevant individuals may be entirely unaware. Moreover, internet of things (IoT) devices like smart speakers or video recording devices could be hijacked to record private conversations or film private activities. Additionally, AI-driven personalisation strategies might utilise an individual's data for manipulative purposes, crossing ethical as well as legal boundaries.

Automated decision-making systems lacking accountability can also infringe privacy rights if they operate without proper oversight or data privacy compliance.

Threats to cyber security

AI can significantly enhance the capabilities of cyber attackers, enabling them to conduct sophisticated and automated attacks. As mentioned, AI-driven phishing efforts can create highly personalised email campaigns that mimic legitimate sources to trick recipients into revealing sensitive information. Social engineering schemes powered by AI can analyse large datasets to target victims with customised deceitful messages. Rapid password guessing becomes feasible as AI algorithms learn from data breach patterns to crack credentials efficiently.

AI can be used to evolve malware, allowing it to adapt its behaviour and evade detection in different environments. Vulnerability scanning is accelerated as AI systems identify security weaknesses within networks or software much faster than manual methods. Botnets managed by AI can execute distributed denial-of-service (DDoS) attacks more effectively, disrupting services by overwhelming targets with traffic from many sources.

Stealthy network reconnaissance is another area where AI excels, gathering intelligence discreetly and identifying vulnerable points without alerting the targets. Additionally, machine learning models are susceptible to data poisoning and adversarial attacks; manipulated training data or crafted inputs can lead these models to make incorrect predictions or classifications, undermining their integrity and effectiveness.

So what can you do? Practical steps to mitigate threats

Mitigating threats to reputation

If you're worried about being the subject of AI-assisted reputational attacks, it's crucial that you act quickly to mitigate any damage caused by publication of information and keep track of what's been published. 

It is advisable to work together with online digital reputation advisers who utilise sophisticated automated tools (sometimes powered by AI) that instantly pick up any negative information published about you anywhere online. Getting live updates will ensure you can act quickly to mitigate any damage. 

If you spot negative false information, it's a good idea to instruct specialist media lawyers and/or PR advisers who can assist in keeping any damage to a minimum. 

With the help of your lawyers, it might be possible to get false information in social media posts or fake reviews taken down, especially where it is contrary to the terms and conditions of the platform or website on which it's published. It might also be possible to show via technical means that a relevant video or voice recording is a deepfake, which will help with any take-down attempt.

Sometimes false information may spread on social media more quickly than it can be taken down. The best avenue in such situations may be to stop the repetition of that information by the mainstream media. If any mainstream media approach you pre-publication, instruct your lawyers early so they can engage with a view to stopping repetition of the false information. If false information is republished, get advice from your lawyers on possible action to have it taken down – this may involve pursuing a legal and/or regulatory complaint against the media organisation in question.

In some circumstances, you may want to publish a public statement setting out a rebuttal to any false information that has been spread. This can assist in 'setting the record straight' but is not always the recommended approach so you should consult PR advisers and/or your lawyers who will best be able to advise you if they are instructed quickly and kept in the loop. You may also wish to deploy such statements when you are approached by the media for comment.

With the help of online forensic experts, you might be able to trace the origin of false information and identify the individuals or organisations behind it. If you do, you can take legal advice on whether to take legal action against them.

Mitigating threats to privacy

Individuals and organisations need to put in place security measures to help prevent IT systems or IoT devices being compromised.

If you've been the subject of an AI-assisted breach, seek advice over available legal claims against the individuals or organisations involved. If successful, remedies obtained from the courts may include an order for the deletion of surveillance recordings.

If private information and/or data have been obtained as a result of an AI-assisted security compromise, you need to act quickly to ensure it is not published on the internet. Take urgent legal advice from specialist lawyers. Recommended actions may include writing to website hosting providers or storage sites onto which the information has been published and/or stored to request take-down and, if that is unsuccessful, making an application for a court order for the removal/deletion of the information. Any applications to the court may be made under anonymity so as to further protect the privacy of the organisation or individuals concerned.

Mitigating threats to cyber security

There are a number of technical measures and practical steps you can take to try to mitigate any potential cyber security threats. These may include:

  • Complex passwords and two-factor authentication - passwords are one of the easiest targets for AI algorithms to crack. Best practice is to have one long, complex password that the user is not required to change often. Believe it or not, this is more secure than requiring a password change every four to six months: employees often find frequent changes annoying, which pushes them to choose weak passwords for memory's sake – and weak passwords are easier for AI algorithms to predict. Additionally, you should implement two-factor authentication log-in processes across all devices (e.g. computers and phones) for added protection.
  • Objective risk assessments - risk assessments should not be a tick-box exercise. A targeted risk assessment is preferable as it allows you to think carefully about what your organisation wants to protect most and then tailor your security measures and software accordingly.
  • Credential restrictions - take advantage of your security software provider's features to restrict the use of employee credentials. Certain providers, such as Microsoft, offer this as part of their security package, enabling organisations to implement rules around how employees use the organisation's credentials. For example, you can restrict an employee from using their work email address on certain websites or to sign up to certain events. This is an effective way to protect against employee work credentials being used on less secure websites, which could otherwise expose you to a cyber attack.
  • Offboarding - it's important to offboard employees (or contractors where relevant) from all systems when they leave. This protects the organisation from hackers who look to target dormant accounts to access confidential information. Offboarding software is available to automate this process.
  • Data Loss Prevention (DLP) - DLP refers to systems and processes organisations should use to help ensure sensitive data does not get lost, misused, or accessed by unauthorised users. DLP helps protect against both insider threats and external attacks by detecting potential breaches or policy violations and blocking access before sensitive information can be exposed. By setting up a robust DLP strategy, you can help secure your intellectual property and protect personal data.
  • Employee training and awareness programs - educate employees about different cyber threats – how to spot them and what to do to help prevent them or mitigate fallout if they occur.
  • Incident response planning - prepare an incident response plan to enable swift action if your systems suffer an attack – whether or not that results in the exfiltration of data.
  • Cyber insurance – even if you have robust software in place to minimise exposure to AI-enabled threats, not all cyber attacks can be prevented, particularly as AI becomes increasingly sophisticated. This makes it important to have appropriate cyber insurance in place. This type of policy typically covers expenses related to data breaches, malicious software attacks, business interruption due to cyber security incidents, and even ransom payments demanded by cybercriminals.
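To make the Data Loss Prevention point above concrete, here is a minimal sketch of the kind of pattern-based scan a DLP system might apply to outbound content. The rule names, regexes and blocking policy are illustrative assumptions only – real DLP products use far richer techniques such as document fingerprinting, exact data matching and machine-learning classifiers:

```python
import re

# Illustrative detection rules (assumptions for this sketch, not any
# particular product's rule set).
PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def scan_outbound(text):
    """Return (rule_name, matched_text) pairs found in outbound content."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

def should_block(text):
    """Naive policy: block the message if any rule matches at all.
    A real DLP policy would weigh context, sender, destination and
    data classification before blocking or quarantining."""
    return bool(scan_outbound(text))
```

A message such as "My card is 4111 1111 1111 1111" would be blocked under this naive policy, while ordinary correspondence passes through; the real design work in DLP lies in tuning rules and policy to minimise false positives without missing genuine leaks.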

One more thing

We can help both with your efforts to prevent breaches and with what to do if they happen. We're used to working with PR advisers, forensic data specialists, IT teams and insurers across the full spectrum of risks and damage associated with data breaches, including AI-enabled ones. We've also put together a selection of services to help organisations get 'breach ready'. This includes carrying out an incident preparedness audit, providing recommendations on how to improve policies or safeguards (where appropriate), and carrying out a breach simulation exercise to test your organisation's response to an incident. We can also review your insurance position and your contractual rights with third parties you've engaged to help with cyber security, and provide training sessions on how to protect your reputation during a crisis. If you would like to hear more about these services, please get in touch with us!

Practice areas and service groups: Data & Cyber | Artificial Intelligence

