While biometrics are still widely considered an essential measure for financial services to adopt, they are simultaneously being identified as one of the biggest threats the sector faces.
Michael Marcotte, co-founder of the US National Cybersecurity Center, recently told The Fintech Times that “the rapid proliferation of deepfakes online means that banks are at the wrong end of an acute digital identification and security crisis – and their current practices, protections, and technologies are miles behind the curve.”
He went on to describe banks’ reliance on ID card, face and address verification as looking “neolithic against deepfakes and AI-powered identification fraud.”
The security threat posed by AI was a major theme at the recent RSA Conference in San Francisco, where businesses and academics gathered to discuss security issues.
AI-powered fraud
AI-powered fraud was found to be the most common type of identity fraud in Sumsub’s third annual Identity Fraud Report, which draws on data from millions of verification checks across 28 industries and over two million fraud cases.
The research found a 10x increase in deepfakes detected globally across all industries, with the North American and APAC regions experiencing surges in cases of 1,740% and 1,530% respectively.
Common fraud techniques include money muling, where innocent individuals are duped into transferring illegally obtained funds, and forced verification, where individuals are manipulated into going through KYC processes for the benefit of fraudsters.
$212 billion problem
The US Treasury recently warned about the growing threat of identity-related suspicious activity, with a report highlighting 1.6 million suspicious activity reports in 2021, affecting about $212 billion worth of transactions.
In Europe, an EMEA report flagged social engineering scams, particularly voice scams, as prevalent, although UK banks reported a 25% reduction in this type of fraud in 2023.