“We cannot attribute agency to AI systems, which risks removing accountability for decision-making away from firms. We must leverage what we already have in terms of existing regulatory frameworks and adapt them as technologies change.” That was the conclusion reached by Financial Conduct Authority (FCA) Chief Data, Information and Intelligence Officer Jessica Rusu in a speech on regulation and risk management of AI at the City and Financial Global summit on November 9.
She stressed that “moving from fear to trust” of AI could best be achieved by deploying existing frameworks and by ensuring data quality.
Rusu referenced the findings of a recent survey the FCA conducted with the Bank of England (BoE), which found strong uptake of machine learning applications across financial services, particularly in the insurance sector. The main risks identified were data bias and a lack of AI explainability.
Basic framework
The FCA sees the Senior Managers and Certification Regime (SMCR) as the basis for a regulatory framework governing the use of AI in financial services, and Rusu referenced the current discussion paper published by the FCA and BoE, which seeks to gather views on building that framework and addressing any gaps.
She highlighted three practical challenges that any governance mechanism for AI would need to address:
- Responsibility – who monitors, controls and supervises the design, development, deployment and evaluation of AI models.
- Creating a framework for dealing with novel challenges, such as AI explainability.
- How effective governance contributes to building a community of stakeholders with a shared skill set and technical understanding.
And she concluded by saying: “Above all, governance matters because it ensures that the responsibility is where it needs to be: with the firm!”
The full text of Rusu’s speech can be found on the FCA’s website.