At the FINRA annual conference earlier this month in Washington, DC, regulators and industry executives discussed how they are embracing technology – data analytics tools and gen AI in particular – and the compliance challenges their businesses face in using them.
Jim Reese, Senior Vice President of Data & Analytics in the Member Supervision Division at FINRA, said FINRA's data analytics tools were using consolidated audit trail data to examine time stamps, recordkeeping adherence, and the accuracy and completeness of data, and to flag activity such as sales of highly leveraged exchange-traded products to retail clients.
“Our algorithms have spotted when brokers offer leveraged products or ones designed to perform inversely to the index or benchmark they track (or both), which are growing in popularity. While such products may be useful in some sophisticated trading strategies, they are highly complex financial instruments that are not suitable for retail investors who plan to hold them for longer than one trading session, generally,” he said.
(FINRA Regulatory Notice 09-31 spells this out.)
STAAT at UBS
Joe Codeira, Chief Data and Analytics Officer at UBS Wealth Management, said there are about 6,000 advisers at his firm, and they use analytics as one way to help ensure they are abiding by UBS policies and procedures. They employ a Smart Technologies and Advanced Analytics Team (STAAT) that aims to provide a better client experience and better client outcomes by helping advisers unearth the most salient insights about their clients and what decisions could help them achieve their goals.
“We took a model designed for top-tier clients only and have refined it to service all wealth-management clients, so everyone gets that level of service. The STAAT insights help us inform the advisers of as much data as they can get throughout the client lifecycle,” he said.
Codeira also said the firm uses this data to create the design framework for future investment vehicles, knowing what best suits client demands over time.
Data requests from FINRA
Reese said he hoped firms think about data governance and who has ownership – and know how to treat a data request from FINRA. He had some tips for firms on the latter point.
He said they should do the following:
- Talk to FINRA about any problems you have collecting the data.
- Look at the guidance documents that often accompany a FINRA production request, to avoid a recurring problem: FINRA having to send multiple requests because it received the wrong data.
- Remember that FINRA seeks accurate, standardized data, and it expects firms to train staff to interpret data so everyone shares at least the same basic vocabulary.
So, you want to use LLMs?
Let’s start with a quick primer. Large language models (LLMs) are generative AI models – neural networks that predict the next word in a sequence. Longer stretches of text are generated by appending each predicted word to the input sequence and predicting again.
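As a toy illustration of that predict-and-append loop (not how production LLMs work – a real model uses a neural network trained on vast corpora, and this sketch substitutes a simple bigram word-frequency table), the generation mechanics look like this:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the huge text collections real LLMs train on.
corpus = "the model predicts the next word and the next word extends the text".split()

# "Training": count which word follows which (a bigram table, not a neural net).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt: str, n_words: int = 5) -> str:
    """Autoregressive loop: predict the most likely next word,
    append it to the sequence, and repeat."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known successor: stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The loop is the point: each output word becomes part of the input for the next prediction, which is exactly how longer stretches of text are built up one token at a time.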
LLMs are trained on large bodies of text curated from public and private sources – from Wikipedia to public documents and news articles to open-source books.
During training, LLMs learn how far back in the text to look and how to weight the importance of words to best predict the next one. LLMs are sometimes confused with conventional software, but conventional software operates on predefined, deterministic instructions and rules to perform tasks as programmed. LLMs instead learn to mimic human reasoning to make estimations or predictions, and they iteratively learn from patterns spotted in more data to improve their outputs over time.
Marco Enriquez, Principal Data Scientist in the Division of Economic and Risk Analysis (DERA) at the SEC, outlined the compliance considerations that come with using LLMs in any of the following activities:
- Third-party management: First, see FINRA Notice 21-29, Supervision Obligations related to Outsourcing to Third-Party Vendors. Then think about what access controls your vendors have, how their use of AI could affect your business, and whether LLMs are being used by the vendor to satisfy critical functions related to data integrity or books and records confirmations.
- Your books and records: Is the model generating records for your firm? How are they archived and maintained?
- Advertising, communications and messaging: Are the tools being used to communicate with the public and is someone qualified to review the work doing so? (Does it need to be someone with Series 24 approval capabilities?)
- Cybersecurity and privacy: Do your records remain secure and confidential, with no unauthorized access? Does consistent testing of the tool confirm that firm data cannot leak through the AI model?
- Reg BI: Recommendations made by an AI tool constitute a recommendation by an unlicensed party and need to be reviewed by a human with the requisite training.
- AI washing safeguards: Is your firm stating or advertising that AI is being used? Are you taking extra steps to understand where, why and how AI is being actually deployed?
FAQs on AI and sweep exams
Scott Gilbert, Vice President, Risk Monitoring (Large Diversified) in the Member Supervision Division at FINRA, reminded the audience that two new FAQs were released by the self-regulatory organization on May 10, both on advertising regulation tied to supervising chatbot communications and AI-created communications.
With regard to chatbots in the first FAQ, FINRA advises that “depending on the nature and number of persons receiving the chatbot communications, they may be subject to FINRA communications rules as correspondence, retail communications, or institutional communications,” and thereby be subject to applicable FINRA rules.
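Those three categories come from FINRA Rule 2210's definitions, which turn largely on who receives the communication. A hypothetical first-pass triage helper (illustrative only, not legal guidance – the 25-recipient threshold is the rule's dividing line between correspondence and retail communications) might look like:

```python
def classify_communication(retail_recipients_30d: int,
                           institutional_only: bool) -> str:
    """Rough triage of a written communication (e.g. from a chatbot)
    into FINRA Rule 2210 categories.

    Rule 2210 treats a communication sent to 25 or fewer retail
    investors within a 30-calendar-day period as 'correspondence',
    to more than 25 as a 'retail communication', and one made
    available only to institutional investors as an 'institutional
    communication'. Real classification needs compliance review.
    """
    if institutional_only:
        return "institutional communication"
    if retail_recipients_30d <= 25:
        return "correspondence"
    return "retail communication"
```

Each category carries different approval, review and filing obligations, which is why the FAQ stresses that the nature and number of recipients determines which rules apply.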
The second FAQ concerns whether a firm is responsible for the content of communications created using AI technology. FINRA confirms that “regardless of whether they are generated by a human or AI technology,” firms are responsible for their communications.
That means applying FINRA and SEC recordkeeping requirements and the content standards in FINRA Rules 2210 and 2220, which require communications to be fair and balanced and free of false, misleading, promissory or exaggerated statements or claims.
For its part, Enriquez reminded us, the SEC (in August 2023) initiated a broad examination sweep of investment advisers focused on their AI use and (in March 2024) charged two investment advisers with making false and misleading statements about their use of AI.
Keep a human in the loop
Lisa Roth, President of Monahan and Roth LLC, reminded the audience to keep a licensed human involved (“keep a human in the loop,” as Wharton Professor Ethan Mollick would say) and to verify all data outputs. “The chatbot collects the data — and the human with judgment reviews it.”
She noted the importance of not relying unduly on the AI technology and having a backup plan if the AI fails.
“Disclose when customers are interacting with an AI system and give them the option to opt out of such interaction,” she advised.
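A minimal sketch of how that advice could be wired into a chatbot's entry point – all function names and message text here are invented for illustration – routes opted-out customers to a human before the bot says anything, and leads every AI session with a disclosure:

```python
# Hypothetical illustration of Roth's advice: disclose the AI and
# honor opt-outs before any chatbot interaction begins.

AI_DISCLOSURE = (
    "You are interacting with an automated AI assistant. "
    "Reply 'AGENT' at any time to reach a human representative."
)

def start_session(customer_opted_out: bool) -> dict:
    """Route the customer before the chatbot responds: opted-out
    customers go straight to a human; everyone else sees the
    disclosure first, with the opt-out path kept open."""
    if customer_opted_out:
        return {"channel": "human", "disclosure": None}
    return {"channel": "ai", "disclosure": AI_DISCLOSURE}
```

The design choice worth noting is that the opt-out check happens before the AI channel is ever engaged, so the disclosure and the escape hatch are structural rather than an afterthought.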
When I caught up with her the next day, I asked about her fellow panelists' observation that there is a dearth of regulations specific to LLMs and that we likely need some. Roth said, as she had on the panel, that she thinks the existing rules from both agencies are sufficient, and explained her reasoning.
“They could ensure proven data protection processes are followed, transparency and accountability through disclosures and regulatory reporting happens, and that there is mitigation of fraud and bias through regular monitoring and testing,” she said.
She advocates that top business leaders set up clear parameters for AI usage in their workplaces as a governance priority. And she urges them to be mindful that even if their business is not using AI, many of their employees likely are, so having policies, procedures and oversight mechanisms in place right now is important.