Deepfakes and deception: The global rise of AI and the risks it poses

Widespread advances in AI, and the growing sophistication of open-source AI tools, are increasingly being exploited by fraudsters for illicit purposes.

Every day, the media in the UK, Hong Kong and around the world report examples of fraudsters using AI to attempt to deceive potential victims, including at the highest levels of the most sophisticated global organizations.

This article considers the potential risks to be aware of, and provides tips to avoid falling victim to AI-generated frauds, from a cross-border perspective focusing on both the UK and Hong Kong.

Cyber-fraud is becoming more widespread

The volume and sophistication of cyber-fraud is rising. Home-working and the move to dispersed operations and systems, as well as the increasing acceptance of conducting business through instant messaging apps, have heightened business vulnerability to cyberattacks – which fraudsters are exploiting.

In particular, “authorized push payment” frauds are becoming increasingly prevalent, where the fraudster presents itself as a known counterparty of the victim (for example, a supplier, customer, or employee / executive in the victim’s own business) and convinces the victim to make a payment to a bank account controlled by the fraudster, rather than the legitimate counterparty.

Significantly, the security vulnerability that is exploited may be in the system of the victim’s counterparty rather than the victim itself. This can give rise to disputes as to who is responsible for losses that arise.

AI is making cyberattacks more sophisticated

Fraudsters are using AI to deceive victims by producing realistic and seemingly legitimate material to facilitate frauds. For example, AI tools are used to create false instructions by drafting convincing emails in the voice or tone of particular individuals. Deepfake AI tools can produce audio clips of individuals’ voices – which may be used to bypass voice recognition systems – or even carry out live phone conversations to deceive victims as to the identity of the person they are speaking to.

AI tools available to fraudsters are even sophisticated enough to generate fake representations of real people in video conferences, to convince victims of the authenticity of payment instructions.

And with ever-developing personal computing power and increasing volumes of data about individuals publicly available online (which AI can scrape from, for example, social media accounts and YouTube videos), fraudsters can send multiple emails to – or hold multiple calls with – people all around the world, in multiple languages and over short periods of time.

Tips to avoid falling victim to AI-generated frauds

With any fraud, prevention is always better than cure:

  1. Stay vigilant – look for basic factual errors (hallucinations) in otherwise persuasive text, and for parts of emails that don’t look quite right (for example, US spellings in emails from UK-based individuals). When on calls with familiar counterparties, listen out for intonation that appears computer-generated. Mock phishing attacks can be used to ensure employees remain on high alert. If something doesn’t feel right, it often isn’t.
  2. Be suspicious – act cautiously upon receiving unusual requests or instructions, for example, (i) from channels that the parties have not used to communicate before, (ii) requests to change payment details, or (iii) payment requests for “secret” transactions. If there is a request to switch further communications to personal devices/accounts, this could also be a red flag.
  3. Verify before making payments or giving out personal details – contact the counterparty using the email / phone number known to you (not necessarily the one stated on the communication which asked for the payment or information), and ask for information that only the correspondent would know to check their identity.
  4. Ensure security systems are updated – vulnerabilities in security systems that can be targeted by fraudsters can result in disputes over who is liable for losses. Email screening tools can also help to filter out phishing attempts.
  5. Promote cultural change – with the pace of development in AI technologies, deepfake materials will undoubtedly become more prevalent and difficult to spot. Encouraging an open culture within organizations, so that employees feel comfortable having direct conversations to verify an instruction in case of doubt – even with the CEO – will mitigate the risk of future loss.

Legal remedies are available

Although prevention is always better than cure, victims have a number of legal tools available to them around the world if, for example, money is paid out into a foreign bank account. The key is for potential victims to act quickly in the jurisdiction to which the payment is made. Having lawyers in the relevant jurisdiction who know the best way to prevent dissipation is critical. Any delay can result in a fraudster moving money into different jurisdictions or alternative assets (usually cryptocurrency) which can frustrate recovery efforts.

One of the key legal remedies that can be obtained in multiple jurisdictions around the world is a freezing injunction that will prevent fraudsters from dealing with or disposing of assets until a final judgment has been obtained. See our Global Freezing Order Guide for further information about the jurisdictions where freezing injunctions can be obtained.

Where wrongdoing has been discovered, “Norwich Pharmacal” orders can be made against innocent third parties that are likely to hold relevant documents or information – for example, to require a bank to disclose the identity of a bank account holder. Search and seizure orders also allow a party to attend a defendant’s premises to search for and inspect documents, ensuring evidence is preserved.

In Hong Kong, it is important to report the fraud to the Police and the relevant bank(s) immediately. The Police and banks in Hong Kong have implemented a number of initiatives to combat fraud cases, including a 24/7 stop-payment mechanism to intercept payments made under deception. A number of similar legal remedies as in the UK are also available to victims in Hong Kong.

Contacts: Phillip Richardson, Tim Browning or Michael Armstrong in London, or Mark Hughes or Wing Chan in Hong Kong. For further details on these legal remedies, and to see what additional legal remedies can be obtained in the UK, see our Civil Fraud Toolkit.