A report by AFME in collaboration with PricewaterhouseCoopers (PwC), titled Artificial Intelligence: Challenges and Opportunities for Compliance, found that 41% of survey respondents identified a lack of “explainability”, or transparency, as their top risk in relation to the implementation of AI.
The majority of firms surveyed (65%) do not currently have discrete frameworks and policies in place for AI, although 82% of those firms have plans to change this in the future. Just 24% of survey respondents reported having a committee with specific responsibility for the management of compliance risk in relation to AI.
“In their 2021 paper on AI in Business and Finance, the OECD estimated that global spend on AI within the financial services industry will reach $110 billion by 2024,” James Kemp, Managing Director, AFME, said. “For financial markets, AI offers opportunities in multiple areas from Front to Back Office. Areas such as trading, decision making, process re-engineering and data analysis have all received investment in the quest for better outcomes and greater efficiencies.”
Graphic: AFME
The report broadly concluded the following:
- Progress in the deployment of AI is more advanced in the 1st Line of Defence (1LoD), with a range of use cases including monitoring, trading decisions and strategy recommendations, surveillance, chatbots and fraud detection.
- While AI might not create completely new risks for organisations, there was recognition that it may amplify existing risks. These risks relate to ethical and performance concerns, such as explainability and data privacy. Transparency around the outcomes of AI systems is critical, and some firms reported simplifying their AI in order to meet internal and regulatory requirements. Considerations of data privacy and data leakage, particularly when using third-party vendors, have limited some firms’ level of comfort in deploying AI.
- AI risks are managed through existing governance and risk frameworks, with only a few firms reporting specific frameworks, processes and governance in place for AI. Similarly, the split of oversight responsibilities between 2nd Line of Defence (2LoD) functions, and the interaction model between these functions, follows existing mandates and allocation of risk stripes.
- There is appetite within compliance functions to realise the potential of AI, and firms do not want to fall behind the curve in terms of evolving industry practice. Use cases for the deployment of AI in compliance to date have been seen mainly in activities related to horizon scanning, Anti-Money Laundering (AML)/Know Your Customer (KYC) and monitoring, with benefits focused on improved operational efficiency and additional capacity to focus on value-add activities.
- The mandate of the compliance function will not fundamentally change with advances in AI, as compliance will still require “human-led” subjective assessment. However, the composition of the compliance officer’s role and the associated skill set are expected to change over time. Compliance employees will need greater digital literacy to execute their mandate effectively, and advanced firms are already upskilling staff in their understanding of technology.
- AI presents an opportunity to deliver a holistic and “dynamic” approach to compliance, with “real time” monitoring, advanced data analytics and the use of preventative, rather than detective, controls driving the embedding of an enhanced compliance culture. Firms see breaking down silos and fostering strong collaboration across the organisation, with the compliance function as a central player, as key to achieving this.
- Regulation is at an early stage and, in the main, firms are using existing expectations to guide their use of AI. Public policy for AI can signal the direction of travel for future regulation (such as the risk-based approach of the EU AI Act) and so firms can start preparing by understanding key characteristics and putting in place proportionate governance. There is a preference for a principles-based framework that provides guidance but also encourages innovation.
- There are clear steps that compliance functions can take to enhance their AI capabilities. Focusing on robust governance, analysis of risk exposure and an end-to-end approach to compliance will support the increased adoption of AI across the business in a secure and sustainable way. It will also give firms a better sense of any gaps in compliance skill sets, metrics and monitoring.
AFME stated in a disclaimer that the report does not constitute professional or legal advice, but is for general guidance and interest purposes only.