This session at the premium surveillance event for finance featured expert practitioner commentary from: Steve Clark, Global Head of Trade Surveillance, Morgan Stanley; Amit Khare, Global Data Science Lead, Compliance, Société Générale; Darren Sirr, Global Head of Surveillance, BNY.
The session started with an audience survey question – what is the primary driver for banks investing in trade surveillance AI capability?
- To address the rising cost of technology – 0%.
- To enhance sophistication of monitoring and surveillance analytics – 100%.
- To enhance data capture completeness and archive retrieval – 0%.
The panel started with an introduction to current usage of AI. In most cases it is used for detection, especially in communications surveillance (via NLP), where it is deemed more effective than lexicon-based matching. One panellist said his firm is now starting to deploy machine-learning-based models in equity trade surveillance and is about to turn off its traditional rules-based models.
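To make the lexicon/NLP distinction concrete, here is a minimal illustrative sketch, not anything the panel described: a lexicon only fires on exact phrases, while even a simple trained text classifier can score paraphrases. All phrases, messages, and labels below are invented for demonstration.

```python
# Illustrative sketch: lexicon matching vs. a simple ML text classifier
# for communications surveillance. All phrases and labels are invented;
# real deployments use far larger corpora and models.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

LEXICON = [r"\bguaranteed returns\b", r"\bkeep this between us\b"]

def lexicon_flag(message: str) -> bool:
    """Classic approach: flag only if a lexicon phrase appears verbatim."""
    return any(re.search(p, message, re.IGNORECASE) for p in LEXICON)

# Tiny toy training set (hypothetical): 1 = escalate, 0 = benign.
messages = [
    "keep this between us, I'll move the price before the close",
    "guaranteed returns if you get in before the announcement",
    "let's just say the fix will look after itself today",
    "can you send the settlement report for yesterday",
    "lunch at 1pm? the usual place",
    "please confirm the trade booking reference",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

test = "let's make sure the fix looks after itself tomorrow"
print("lexicon flag:", lexicon_flag(test))   # False: no exact lexicon phrase
# The classifier scores the paraphrase by learned similarity to escalated examples.
print("model score:", model.predict_proba([test])[0, 1])
```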
The same firm has been testing anomaly detection using outlier techniques across some of its trades. Improvements in data availability and systems integration have opened up approaches first mooted as the "holistic surveillance" revolution about 10 years ago.
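The panel did not describe the firm's techniques, but as a hedged illustration of the outlier approach, the sketch below runs an isolation forest over synthetic trade features. The feature set (notional, deviation from mid, time of day) and the contamination rate are assumptions for demonstration.

```python
# Sketch of outlier-based anomaly detection on trade features. The data
# is synthetic; in practice, feature engineering is the hard part.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" trades: [log notional, bps deviation from mid, hour]
normal = np.column_stack([
    rng.normal(12, 1, 1000),    # log notional clustered around e^12
    rng.normal(0, 2, 1000),     # small deviation from mid, in bps
    rng.uniform(8, 17, 1000),   # executed during market hours
])
# A few injected outliers: huge size, priced well off mid, after hours
outliers = np.array([[16, 25, 20], [15, -30, 6], [17, 40, 22]])
trades = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(trades)
scores = model.score_samples(trades)    # lower = more anomalous
flagged = np.argsort(scores)[:5]        # review queue: worst five trades
print("flagged trade indices:", flagged)
```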
Dodd-Frank compliance
Another panellist commented that US regulators are moving from expecting risk-based monitoring to total coverage. His firm is using AI to automate what was previously a very manual Dodd-Frank compliance process, but so far it has only been benchmarking the AI results against the old approach to assess which produces the better outcome. There is now more budget available for use cases like this in surveillance (beyond pure MAR compliance).
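The firm's actual methodology was not described, but a minimal sketch of what such benchmarking can look like is below: compare both systems' alerts against investigator-confirmed outcomes. The alert IDs and dispositions are hypothetical.

```python
# Hypothetical benchmark of a legacy rules engine against an AI model
# over the same review outcomes. In practice this runs over months of cases.
legacy_alerts = {"T101", "T102", "T103", "T104", "T105", "T106"}
ai_alerts     = {"T103", "T104", "T107", "T108"}
true_issues   = {"T103", "T104", "T107"}   # confirmed by investigators

def summarise(name, alerts):
    hits = alerts & true_issues
    print(f"{name}: precision={len(hits) / len(alerts):.2f} "
          f"recall={len(hits) / len(true_issues):.2f} volume={len(alerts)}")

summarise("legacy", legacy_alerts)   # high volume, low precision
summarise("ai", ai_alerts)           # fewer alerts, more of them real

# What each approach sees that the other misses
print("found only by AI:", ai_alerts - legacy_alerts)
print("found only by legacy:", legacy_alerts - ai_alerts)
```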
An AI project needs a defined purpose and a business problem to solve. The panellist drew attention to the amount of power needed to run 72-billion-parameter LLMs, saying it equated to the power needed to supply a three-bedroom house with a family of five for 120 years. He warned against creating a solution that goes looking for a problem: not all requirements demand an AI-based solution, and he worries banks will lose sight of their purpose if they put AI first in their infrastructure.
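The panellist's own assumptions were not given, but a rough sanity check suggests the order of magnitude is plausible. Every figure below is an assumption for illustration: a GPU-hour budget of the order publicly reported for training ~70B-parameter open models, an A100-class per-GPU draw, and a typical household's annual consumption.

```python
# Back-of-envelope sanity check of the quoted power claim. All figures
# are assumptions for illustration, not numbers from the session.
gpu_hours = 1_700_000           # assumed training budget, ~70B-class model
gpu_power_kw = 0.4              # assumed per-GPU draw (A100-class TDP)
household_kwh_per_year = 4_000  # assumed annual consumption of one home

training_kwh = gpu_hours * gpu_power_kw
years = training_kwh / household_kwh_per_year
print(f"training energy ~{training_kwh:,.0f} kWh "
      f"~= {years:.0f} household-years")   # same order as the quoted 120
```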
Another panellist stated that regulators will expect firms to be using AI if their peers are. It is incumbent on everyone in the industry to strive to do their best; for surveillance this means control of risk and improved detection. The UK FCA recently ran a very promising tech sprint on challenger surveillance. The speaker predicted that this is the end of the line for traditional surveillance and the opportunity for revolution, not evolution: it is possible to improve detection and reduce costs now.
Another panellist added that effective anomaly detection is a positive outcome of using AI, though he still doubts its completeness. He predicted that it will lead to the replacement of outsourced functions but not of subject matter experts.
Equity trading
One of the panel said that their application of AI to equity trading was resulting in better detection with fewer false positives and fewer low-quality alerts generally. He was honest about the cost: his bank had invested a large amount of human and intellectual capital over two years.
The training commitment was also significant (and will need to be repeated annually). He said that, from a risk acceptance perspective, they knew this approach would miss some things the old method had detected, but the quid pro quo is that they are now finding enough new cases to more than compensate for that gap.
Another survey question was asked – in the overall trade surveillance workflow implemented in your institution, where is AI currently deployed?
- Data ingestion – 5%.
- Alert generation – 27%.
- Case management – 16%.
- Rule engine parameterization and fine-tuning – 7%.
- Not used – 45%.
The session ended with a discussion about transparency and accountability in AI, covering the usual subjects of model risk management, comprehensive documentation, robust validation analysis, and explanation of how the models are designed and evidence that they are working as intended. This process can be independently audited by the model risk group to provide the required check and challenge. This is a good discipline to apply.
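As one hedged illustration of the kind of evidence such check and challenge can rest on (not a method described in the session), the sketch below trains a toy classifier on synthetic trade features and reports permutation importance and holdout accuracy, both of which are easy to document, reproduce, and hand to a model risk group.

```python
# Sketch of auditable validation evidence: which features drive the model,
# and how it performs on held-out data. Features and data are synthetic
# placeholders, not a real surveillance model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))   # toy features: order size, cancel ratio, spread move
y = (X[:, 1] + 0.5 * X[:, 2] > 1).astype(int)   # toy "manipulation" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data: documented, repeatable, auditable
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["order_size", "cancel_ratio", "spread_move"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
print("holdout accuracy:", model.score(X_te, y_te))
```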
Ethical questions also came up: the recent EU AI Act poses supervisory questions, especially related to profiling. Containment, bias, and the need to retrain are all considerations, but the panel's final thought was that we all need to try it, experiment, find the best use cases, and keep testing.
Because the session was held under the Chatham House Rule, this summary does not attribute any comments to individuals. It is also not a full transcription of the session, but it conveys the sense of the discussion as interpreted and reported by the author.