Voice comms, ecomms and AI received plenty of focus at the recent XLoD conference in New York, with engaging discussions on best practice in using all three. How we handle data – that dreaded four-letter word – was also to the fore. Our second report from the conference, which focuses on the three lines of defense at organizations and surveillance techniques, contains some pertinent best practice pointers.
To begin, here’s an at-a-glance guide to some of the advice and observations that caught our attention. And our first report is still available to read.
Best practice pointers
- An LLM can spot someone saying “Don, I can give you an offer you cannot refuse” – while your lexicon-based archiving tool might dismiss it as a mere entertainment reference from the movie The Godfather and not flag it.
- Know who does what when it comes to monitoring and archiving communications. At large organizations this can mean many teams – from surveillance to data management to legal and compliance – must establish clear roles for creating lexicons and for capturing and analyzing data.
- Financial crime often goes undetected because of an inability to collect complete data through effective transaction monitoring and suspicious activity reporting. Doing this well requires robust technology, skilled people to conduct detailed reviews, and the threat of tangible disciplinary measures.
- Incomplete capture of data feeds is not always – or not only – a surveillance failure. It warrants a wider probe into which processes are not working and which workarounds people have exploited.
- A speaker from a large bank acknowledged that some of the bank’s technology is aging, which means its outputs require closer scrutiny. The same candid speaker admitted the bank still misses some things in its first-round reviews – but it is learning from those misses to fine-tune its processes.
- Remember that retention periods under regulatory rules may differ for video recordings and for transcripts of video meetings.
- Businesses seem to be enhancing their technology resources and using more consultants to capture communications more effectively and future-proof their compliance – with the exception of social media. Attendees felt social media capture was lagging behind, with only 33% of the audience saying they retained LinkedIn messages.
- “Data is a four-letter word,” one panelist said. Being able to evidence where data is held and access it on demand is essential – but a big challenge. Data clarity depends on the three lines of defense having clear expectations and demarcated responsibilities. The ecomms enforcement actions are prompting better governance of data across the board, some speakers emphasized.
- The JPMorgan trade surveillance case offers a cautionary tale of how this process is not a check-the-box exercise. Assess whether there is a need to review venue assessment, onboarding and ongoing management at your firm. Along these lines, the event speakers said a big challenge is figuring out whether a comprehensive integration of comms and trade surveillance data is still a realistic possibility.
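The lexicon-versus-LLM gap in the first pointer above can be illustrated with a toy sketch: a keyword lexicon flags only literal phrase matches, so a veiled threat phrased as a movie reference slips through and needs semantic review. The lexicon entries and messages below are invented for illustration, not drawn from any real surveillance tool.

```python
# Toy illustration: a lexicon matcher flags only verbatim keyword hits,
# so a coercive message disguised as a movie quote passes unflagged.
# Lexicon entries and messages are invented for illustration.

LEXICON = {"bribe", "kickback", "guarantee you a profit", "off the books"}

def lexicon_flag(message: str) -> bool:
    """Flag a message if any lexicon phrase appears verbatim."""
    text = message.lower()
    return any(phrase in text for phrase in LEXICON)

messages = [
    "Let's keep this payment off the books.",           # literal hit: flagged
    "Don, I can give you an offer you cannot refuse.",  # veiled threat: missed
]

print([lexicon_flag(m) for m in messages])  # [True, False]
```

The second message is exactly the kind an LLM, reading for intent rather than keywords, could surface for human review.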
AI considerations
Large language models (LLMs) can fail, and as news reports point out, they fail while sounding utterly convinced of their output. Humans must stay in the loop – they have an active role to play in verifying results and establishing causal links between inputs and outputs.
As firms use AI to streamline the review process and reduce false positives – but also identify more bad actors – the tools will make these businesses far more efficient, speakers pointed out. Voice surveillance in particular is one of the more promising use cases for AI, most of the industry speakers felt.
AI’s ability to understand and alert on multiple languages is particularly promising. Over time, these models have proven able to perform sentiment analysis, assessing the tone of a text as positive, neutral or negative.
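The positive/neutral/negative bucketing described above can be sketched in miniature. Production systems use trained models rather than word lists, but a simple word-count scorer shows the shape of the classification; the word lists and example phrases here are invented for illustration.

```python
# Minimal sketch of sentiment bucketing: count positive vs. negative
# words and map the net score to a label. Real sentiment analysis uses
# trained models; the word lists below are invented for illustration.

POSITIVE = {"great", "pleased", "win", "excellent", "thanks"}
NEGATIVE = {"angry", "loss", "terrible", "complaint", "refuse"}

def classify_tone(text: str) -> str:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_tone("Great quarter, pleased with the desk"))        # positive
print(classify_tone("Another loss, the client filed a complaint"))  # negative
```

A model-based classifier generalizes beyond any fixed word list – which is precisely why speakers see AI as promising here.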
Train an LLM on your particular documents, so it gains an understanding of the terms you use as an organization, such as specialized vocabulary and acronyms.
Fine-tuning – continuing the training of a pre-trained model with additional content relevant to a specific domain or task – can be costly but essential. It is drastically cheaper and faster than training an LLM from scratch, but still pricey.
Speaking of price, considering that the initial training of GPT-4 is estimated to have cost more than $100m, custom-developing an LLM is simply not feasible for most organizations.
One of the biggest challenges in the LLM arena, besides price, is that the quality of a response is highly dependent on prompt phrasing. Small changes to a prompt – word choices that would make no difference to a human reader – can make a huge difference to an LLM’s output.
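One way teams probe this sensitivity is to run paraphrased prompts – phrasings a human would read identically – through the model and compare the responses. A minimal harness is sketched below; the `ask_llm` function is a deterministic hash-based stub standing in for a real LLM API call, and the prompts are invented examples.

```python
# Sketch of a prompt-sensitivity check: send paraphrased prompts and
# compare responses. ask_llm is a deterministic stub (hash-based) that
# stands in for a real LLM API call, purely for illustration.
import hashlib

def ask_llm(prompt: str) -> str:
    """Stub: a real implementation would call your LLM provider here."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    return f"response-{digest[:8]}"

paraphrases = [
    "Flag any messages that suggest market manipulation.",
    "Identify messages indicating possible market manipulation.",
]

responses = {p: ask_llm(p) for p in paraphrases}
consistent = len(set(responses.values())) == 1
print(consistent)  # False: tiny wording changes yield different outputs
```

In practice, teams treat large divergence across paraphrases as a signal that a prompt needs standardizing before it is relied on in a surveillance workflow.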
- ICYMI: Global Relay has compiled industry responses to regulatory action across recordkeeping, surveillance, and communication compliance to show how executives are managing business communication, and their attitudes to emerging risk, from AI to social media. Many of the concerns noted above are reflected and analyzed there.