Last week, FINRA issued Regulatory Notice 24-09, reminding member firms that its rules and the securities laws apply to the use of artificial intelligence (AI), including large language models (LLMs) and other generative AI technology, just as they apply when member firms use any other technology.
Although the Notice does not create new legal or regulatory requirements or new interpretations of existing requirements, it is a reminder to member firms that use of the shiny new toys falls under existing federal securities laws and regulations. And it is a reminder that the financial watchdogs are … watching.
Rapid innovation and attendant risk
FINRA starts out by acknowledging the rapid and continuing innovation in this area.
The regulator says that the use of these technologies is not new and nothing in the rules prevents firms from continuing to innovate and adopt them in their operations. It also emphasizes its understanding that the use of AI tools presents opportunities for both investors and firms.
The notice points out a number of areas in which AI and LLM technologies are being experimented with and used by member firms, including:
- analyzing and synthesizing financial and market data;
- summarizing large and complex documents;
- powering investor education resources;
- querying firm policies, procedures or forms;
- generating summaries of firm research reports;
- obtaining issuer-specific information from SEC filings and earnings call transcripts; and
- aiding surveillance efforts by generating summary reports of potential evidence of malfeasance, including market abuse or insider trading.
However, it also draws attention to the risks associated with AI and LLM technologies, including:
- accuracy problems;
- privacy concerns;
- bias;
- intellectual property infringement; and
- exploitation by threat actors.
TPRM (but of course)
Ensuring that these risks are adequately addressed or mitigated is essential whether the firm is developing proprietary AI/LLM tools or harnessing these technologies by leveraging third-party offerings.
This is an important point, as it underscores a consistent message from regulators about the use of third-party tools and features: It is ultimately the firm itself that bears responsibility for managing the risks stemming from them. And the firm must ensure it remains compliant with all applicable laws and rules when employing them.
Practical guidance for firms
In the notice, FINRA draws attention to specific guidance already issued on AI usage, including in connection with content standards for communications with the public.
It also points to its 2024 annual regulatory oversight report, which highlights AI as an emerging risk, notes it could “implicate virtually every aspect of a member firm’s regulatory obligations,” and urges firms to pay particular attention to the following areas when considering its use:
- anti-money laundering;
- books and records;
- business continuity planning;
- communications with the public;
- customer information protection;
- cybersecurity;
- model risk management;
- testing;
- governance;
- data integrity;
- explainability;
- research;
- SEC’s Regulation Best Interest;
- supervision; and
- vendor management.
The notice indicates that FINRA is ready to engage with firms on the supervisory and compliance implications of AI adoption. And the self-regulatory organization further suggests that firms do the following to foster such a dialogue:
- Seek interpretive guidance from FINRA.
- Engage in discussions with their Risk Monitoring Analyst.
- Offer general feedback on potential rule modernization.
- Contact the Office of General Counsel for policy and rules-related discussions.
- Contact FINRA’s Office of Regulatory Economics and Market Analysis/Office of Financial Innovation for AI engagement.
GRIP Comment
For compliance teams, the key theme to highlight to senior leadership and the board is that AI innovation is invited, and maybe even expected, but encouragement to develop these tools is not a free pass should things go wrong.
It’s not a repercussion-free technology sandbox.
FINRA’s point about exposure to features introduced by vendors, especially those supplying technology that directly enables compliance operations, is particularly relevant.
A few key questions to ask the leadership within your organization: Do we understand our full potential AI risk exposure when adopting or adapting new systems, or system features, provided by our third-party suppliers? Have we tested these scenarios in action? Do we have back-up plans for when the technology does not work as intended (covering both the tech and the vendor itself)? And do we have the internal skill sets needed to exercise appropriate oversight and informed judgment over the tool’s performance and outputs?