26 February 2025
Fraud, Corporate Crime & Investigations – 2 of 3 Insights
The face of fraud has evolved dramatically over the past few years. Email scams, hacking and website impersonation are commonplace. But the evolution of Generative AI, large language models and an increasingly virtual world mean that fraud now has technological backing in a way we haven't seen before. Some of the tell-tale signs of fraud we had become accustomed to – such as poorly worded and grammatically incorrect email approaches – have given way to word-perfect messages, highly convincing deepfakes and sophisticated ransomware attacks. These advancements in technology mean that very little skill is required on the part of fraudsters to execute such scams, and it has become cheaper and easier to scale the same fraud model to reach a larger number of potential targets.
In this article, we explore how AI is being used in fraud, the impact this is likely to have on dispute resolution, and what individuals, companies and governments can do to address the threat of AI-enabled fraud.
Research conducted by PricewaterhouseCoopers LLP and Stop Scams UK in December 2023 identified a number of ways in which AI is being used in connection with fraud.
While AI can be exploited for fraudulent behaviour, it also serves as a powerful tool for detecting fraud. For instance, AI systems can analyse written or spoken communication to identify unusual patterns or irregularities, effectively flagging potential scams before they escalate. This dual capability emphasises the importance of deploying AI technology responsibly in both prevention and detection efforts.
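To make the idea concrete, the snippet below is a deliberately minimal sketch (in Python, using scikit-learn) of the kind of pattern analysis such systems perform: it learns what routine messages look like and flags incoming messages that deviate from that baseline. The messages and parameters are entirely hypothetical, and real fraud-detection systems are considerably more sophisticated.

```python
# A toy illustration of anomaly-based message screening. All data and
# parameters are hypothetical; production systems are far more complex.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical history of routine, legitimate internal messages.
routine_messages = [
    "Please find the monthly report attached.",
    "Can we move the 3pm call to tomorrow?",
    "Invoice 4821 has been approved and scheduled for payment.",
    "Reminder: team meeting in room 2 at 10am.",
]

# New messages to screen; the first mimics a classic CXO-fraud approach.
incoming = [
    "URGENT: transfer USD 2m to the account below immediately "
    "and do not discuss this with anyone.",
    "Can we move the 3pm call to tomorrow?",
]

# Character n-gram TF-IDF captures phrasing patterns, not just keywords.
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
baseline = vectoriser.fit_transform(routine_messages)

# Fit an isolation forest on the baseline of normal traffic; its
# decision function is negative for inputs unlike that baseline.
detector = IsolationForest(random_state=0).fit(baseline)
scores = detector.decision_function(vectoriser.transform(incoming))

for message, score in zip(incoming, scores):
    status = "FLAG FOR REVIEW" if score < 0 else "ok"
    print(f"[{status}] score={score:.3f} :: {message[:50]}")
```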
We are already seeing sophisticated instances of AI-enabled fraud. We recently acted for a client in Slovakia in respect of what is often referred to as an "authorised push payment" (APP) scam or "CXO fraud", where the fraudsters used deepfake technology to impersonate one of the company's non-domestic executives and instructed the client's accountant, via a telephone call, to urgently transfer a multi-million dollar sum to the fraudsters' account in Hong Kong.
The client discovered that the transfer was fraudulent three hours after it was submitted via the online banking system. Attempts to stop the transfer were unsuccessful. On receipt of the funds, the fraudsters distributed them to multiple accounts across different banks. We successfully assisted the client in obtaining disclosure orders to trace the funds and freezing orders to prevent further dissipation. The client was ultimately successful in recovering a large proportion of the stolen funds.
The accessibility and sophistication of deepfake and voice-cloning technology mean that it is no longer safe to rely on seeing a familiar face or hearing a familiar voice – most people have some presence online which provides access to their photo and/or voice, and it only takes a small sample of that data to create a convincing deepfake. Additional safeguards are therefore needed, particularly when issuing payment instructions. Businesses should review their payment approval processes to address these evolving risks, ensure their staff are properly trained on those processes, and engage with their banks and payment service providers to ascertain what fraud prevention technologies can be deployed as an additional safeguard.
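By way of illustration only, the sketch below shows how two common safeguards – dual approval and a call-back to a number already held on file – might be encoded in a payment-release workflow. Every name and threshold in it is hypothetical rather than a recommendation for any particular business.

```python
from dataclasses import dataclass

# Hypothetical policy thresholds; illustrative values only.
DUAL_APPROVAL_THRESHOLD = 10_000  # above this, two approvers are needed
CALLBACK_THRESHOLD = 50_000       # above this, a call-back check is needed

@dataclass
class PaymentInstruction:
    amount: float
    beneficiary: str
    requested_by: str
    approvers: tuple[str, ...]  # staff who independently approved
    callback_verified: bool     # verified via a number held on file,
                                # never one supplied in the instruction

def may_release(p: PaymentInstruction) -> tuple[bool, str]:
    """Apply layered checks before a payment is released."""
    # The requester must never be their own approver.
    if p.requested_by in p.approvers:
        return False, "requester cannot approve their own payment"
    if p.amount > DUAL_APPROVAL_THRESHOLD and len(set(p.approvers)) < 2:
        return False, "two independent approvers required"
    if p.amount > CALLBACK_THRESHOLD and not p.callback_verified:
        return False, "call-back verification on a known number required"
    return True, "release permitted"

# A deepfaked 'urgent' phone instruction fails these checks: no second
# approver and no independent call-back, however convincing the voice.
urgent = PaymentInstruction(
    amount=2_000_000, beneficiary="HK account", requested_by="accountant",
    approvers=("accountant",), callback_verified=False)
print(may_release(urgent))
```

The design point is that none of these checks depends on recognising a face or a voice: they force a second, independent channel that a deepfake alone cannot satisfy.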
We expect that the use of AI to commit fraud will affect the types of disputes that individuals and companies find themselves in, as well as the way in which disputes are resolved by courts and arbitral bodies.
As to the types of disputes:
Tackling fraud has been high on the legislative agenda in the UK for a number of years, and we expect that to continue, with tech-enabled fraud becoming an increasingly prominent feature. Recent measures introduced in the UK to tackle fraud include:
In November 2023, the UK government entered into a voluntary agreement with the technology sector – the Online Fraud Charter – to reduce fraud on their platforms and services. Its signatories include Amazon, Google, Facebook, Microsoft and others. The key actions required of signatories include deploying measures to detect and block fraudulent material. A number of technology companies have already developed AI detection software and, as the technology used to commit fraud continues to develop, we expect technology companies will need to continue investing in prevention measures. However, given the voluntary nature of the Charter, its scope is limited.
Ofcom is the regulator responsible for overseeing the implementation of the Online Safety Act ("OSA"), which introduces a number of new offences for companies with an online presence that fail to prevent online harms, including fraud. The FCA recently highlighted the importance of the Act in fighting fraud on tech platforms, particularly APP fraud (see here).
Ofcom is implementing a number of Codes of Practice in support of the OSA, the first of which were published at the end of 2024. These included guidance on risk assessments to protect people from illegal harms online, including fraud and financial offences.
The Economic Crime and Corporate Transparency Act 2023 ("ECCTA") creates a new corporate criminal offence of failure to prevent fraud, which applies to large organisations and comes into force on 1 September 2025. The new offence is designed to hold relevant organisations to account if they benefit from fraud committed by their employees or agents and the organisation did not have reasonable fraud prevention procedures in place. Guidance from the government on the ECCTA and the reasonable procedures defence was published on 6 November 2024.
Given the increased use of technology in the way fraud is committed, if an organisation is aware of a risk that AI technology could be used by its employees or agents to commit fraud and does not take measures to try to prevent it, that organisation may find itself liable in the event that fraud is committed.
For more on the ECCTA and how we can help, see here.
The EU is leading legislative efforts when it comes to AI technology, having adopted the first comprehensive AI regulation. The AI Act establishes safety requirements that companies must meet before placing AI products on the EU market.
The Act outright bans certain AI applications deemed to pose unacceptable risks, such as manipulative AI technologies that could exploit individuals' vulnerabilities. By prohibiting these harmful applications, the AI Act aims to protect citizens from potential fraud and abuse. However, deepfake tools remain classified as "limited risk" AI systems, which only requires users to clearly disclose that the content is AI-generated.
AI systems falling within the high-risk category must adhere to strict compliance requirements, including risk management, data governance and transparency obligations. However, the AI Act exempts AI systems used to detect financial fraud from the high-risk category, simplifying their deployment and use.
Along with the Digital Markets Act, the Digital Services Act ("DSA") is a major pillar of the EU's "fit for the Digital Age" initiative. It is premised on the notion that any activity deemed illegal offline should also be illegal online. The Act mandates that platforms establish easy-to-use systems for users to report illegal content, including fraudulent activity. This facilitates the quicker identification and removal of scams, enhancing user safety.
Both the AI Act and the DSA enhance online transparency, making it more challenging for fraud attempts to succeed and easier for them to be detected. For more information on advertising requirements under the DSA and what they mean for your business, see here.
As traditional methods of detecting fraud become obsolete, it is essential that businesses, individuals and legislators keep pace with the methods used by fraudsters and identify opportunities to use AI and other new technologies as fraud-protection tools.
"Risk assessments" are increasingly becoming a mandatory feature of the regulatory landscape for businesses and it is therefore essential that businesses understand the laws and regulations that apply to them to ensure they are compliant, but also to ensure that they have appropriate fraud prevention and protection measures in place which are reviewed regularly to address advancements in technology.