The use of AI in financial services is, evidently, something of a secret. A hush surrounds these commercial crown jewels, driven by uncertainty about how regulators will apply established principles, a lack of clear information about what other organizations are doing, and the risk of reputational damage and regulatory sanctions if something were to go wrong.
We know from published surveys, including those undertaken by the regulators, that AI is out there and in use for cybersecurity purposes, in customer service functions, in sales and marketing, in research and development, and in other ‘back office’ functions, but a sense of apprehension remains, along with a lack of any real detail about real-world commercial applications.
To get to the bottom of why this is, and to find out more about what is really going on, we rely on anecdotal information, discussions with colleagues, conversations with industry bodies, and expert-led financial services news.
Earlier this month, a news feature covered AI concepts being explored by a financial services firm. Concepts are, of course, quite different from “use cases”, but this is where it all starts, and this most recent report rings true with other news and information that is available.
What is going on in financial services firms seems to revolve around cautious testing and steady iteration, allowing issues to be ironed out as they arise. This testing typically starts with defining the problem that needs to be solved, followed by an initial test phase in which no real data is used, then possibly a larger test phase that might include some real data, and then possibly “going live” against some kind of implementation roadmap.
Key questions
One of the first questions firms address as they start their AI journeys is whether the problem they need to solve actually requires AI. AI could be a sledgehammer to crack a nut; something simpler might work.
If AI is needed, what kind of AI is needed? Firms then have to think carefully about a multifaceted mesh of issues, including organizational risk tolerance, their ability to maintain human oversight of their chosen AI, accountability, AI-generated mistakes, the difficulty of predicting their AI’s outcomes, other “black box” problems, skills gaps, and the highly sensitive nature of the source data.
A recent study of AI use in banks by Evident (an organization that tracks and analyzes how banks are using AI, and which will expand its coverage to the insurance sector later this year) suggests that the playing field is currently dominated by large multinational banks – led by US banks but with UK and EU banks gaining ground. The study focuses on a part of the financial services sector where investment resources are possibly at their highest, which is traditionally risk-averse, and where controls, risk management and governance frameworks, and consumer trust are foundational. It examines the lessons learned, the best practices adopted, and the balancing act between innovation and responsible deployment.
From the Evident Responsible AI Report: “Banks are traditionally risk-averse organizations. They operate in a highly regulated and competitive industry, where establishing (and maintaining) consumer trust is paramount. As such, they maintain robust technology control, risk management, and governance frameworks – including model risk management, operational resilience programs, and regulatory compliance structures. These guardrails were built over many decades to help financial institutions adapt to the latest technological innovations and regulatory expectations. However, AI poses new risks.”
Big risks
The main risks cited for this sector, and there are many, include: data risks, ethical risks, stability risks, cyber risks, third-party risks, sustainability risks, HR risks, gaps in employee understanding, and the ability of governance, risk, and compliance frameworks to evolve in response to shifting risks.
The surveyed banks take a responsible approach that starts with “first principles”: establishing accountability, ensuring transparency, anticipating regulatory requirements, and upholding ethical commitments and operational standards. Each bank then needs to translate these principles into systems that provide structure and actionable controls for its own particular business environment.
The principles must be embedded into responsible practice at the design and testing stages, at deployment, in monitoring, and throughout the entire lifecycle of any use case, in a way that is nimble and agile enough to evolve with emergent risks.
This is not about transformations happening in silos. It is about driving the right culture and the right knowledge through entire organizations: embedding AI skills across organizational talent, developing dedicated AI expertise, fostering cross-functional collaboration, and empowering leaders with specific responsibility for AI. It is about evolving AI capability in environments where the impact on end users, the trust they will need to have in these systems in order to continue using them, and the views of the regulators are all front of mind.
The work being done by the leading banks is likely to shape the criteria applied to other entities in the financial services sector, providing guidelines for them to mirror in their own evolution. It may well gain traction across the sector as good practice as firms work out and establish responsible ways to operationalize AI and become quicker, safer, and more efficient, eventually unlocking competitive advantages.
Does the tortoise win the race?
Is this the art of going slow to go fast? Given the highly regulated nature of the industry, the need for careful handling of foundational issues, and the availability of sandboxes and other testing facilities, will early-stage caution enable financial services firms to reach the pace required to remain viable in an AI-driven world, with appropriate guardrails in place around new and emergent risks? Once that point is reached, could this be where the competitive advantages of AI for financial services are really unlocked?
The financial services sector regulators and the UK government are focused on this space. We have just finalized our response to Treasury’s Call for Evidence and will be monitoring closely for its output, as well as for the results of other regulatory initiatives launched earlier this year that seek to unlock the advantages of AI for the financial services sector.
Author: Kerry Berchem joined Burges Salmon in 2023 and is the Practice Development lawyer for the firm’s Financial Services team.
Contacts: Tom Whittaker, Martin Cook.
