The SEC is asking investment advisers how they use and oversee artificial intelligence, the WSJ reports, with its exams division sending requests for information on AI-related topics to several of them as part of a process known as a sweep.
This all comes as SEC Chair Gary Gensler, the White House, and other federal agencies continue to express concerns about the technology. Mounting a sweep doesn’t mean the agency suspects any misconduct by the firms being questioned.
The request for information
The SEC’s requests in the sweep letter, which cover 26 broad topics, reflect known agency concerns. The letter, for example, demands that firms turn over documents on the management of potential AI-linked conflicts of interest. It also asks firms to provide information on their contingency plans for system failure, reports on AI systems causing regulatory or legal issues, and recent examples of advertising that mentioned AI.
The agency wants details on topics including AI-related marketing documents, algorithmic models used to manage client portfolios, third-party providers and compliance training, according to one such letter obtained by Vigilant Compliance, a regulatory compliance consulting firm.
Karen Barr, the head of the Investment Advisers Association, confirmed to the WSJ that her trade group has heard about the SEC outreach to advisers on the use and governance of AI. The agency’s exercise could be “extremely helpful as the commission considers policy issues relating to these emerging technologies,” she said.
An SEC spokesman told the WSJ that the agency’s examinations aren’t public and declined to confirm or deny the sweep’s existence.
Gensler’s sentiments
This news comes just days after SEC Chair Gary Gensler gave a speech in which he warned businesses not to make false claims about their AI capabilities. Gensler referred to this practice as “AI washing,” comparing it to “greenwashing,” the term for when companies exaggerate their environmental records. “Don’t do it,” said Gensler. “One shouldn’t greenwash, and one shouldn’t AI wash.”
(To be fair, Gensler also stated in that speech that AI holds promise in helping the SEC staff with market surveillance and disclosure reviews, plus exams and enforcement activities.)
And in another speech earlier this year, Gensler said recent advances in generative AI increase the possibility that institutions will rely on the same subset of inaccurate or irrelevant information, creating the risk of something like the 2008 financial crisis, when banks played “follow the leader” based on information from credit raters.
“It’s frankly a hard challenge,” Gensler told the Financial Times. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do. And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”
In July, the SEC proposed rules around the use and oversight of predictive data analytics (PDA) tools, focusing on preventing potential conflicts of interest in the use of such models by broker-dealers and investment advisers.
Other voices of concern
The Biden Administration issued its Blueprint for an AI Bill of Rights last October, setting out five principles to guide the design, use, and deployment of automated systems and to protect the American public in the age of AI.
The Consumer Financial Protection Bureau requires explanations for credit denials from AI systems. And the Equal Employment Opportunity Commission has the authority to require a non-AI alternative for people with disabilities and enforce non-discrimination in AI hiring.
And the FTC, CFPB, EEOC, and the Department of Justice released a joint statement outlining a commitment to enforce their respective laws and regulations to promote responsible innovation in automated systems.
On October 30, President Biden signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.
The order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release, and the Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software.
And in November, the Federal Trade Commission (FTC) voted 3-0 to approve an omnibus resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use or claim to be produced using AI or claim to detect its use. The resolution will make it easier for FTC staff to issue civil investigative demands, which are a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the agency’s authority to determine when such demands are issued.
Investment adviser concerns
Investment advisers have a fiduciary duty that governs their interactions with investors, including interactions that involve the use of AI and other technologies. Navigating that duty while using the technology will require careful oversight and documentation of the processes used to ensure investor protection.
This is especially true where customers are considered vulnerable clients and might need added monitoring to ensure they receive product recommendations consistent with their level of understanding and their risk tolerances.
Additionally, as automated advisory tools such as robo-advisers scale up with further AI enhancements, the oversight they require will be significant. IA firms will need to be quite clear how these tools recommend and select products, and how they actively target and communicate with individual retail investors, outside the confines of an authentic and personal IA-customer relationship.
And if consent and full and fair disclosure have in the past helped firms address potential conflict-of-interest concerns the SEC might raise, will the general populace be able to understand the complexity of the technology well enough to give full consent?
And when it comes to brokers, Reg BI says they must identify and then eliminate or mitigate conflicts, but the SEC’s rule proposal on PDA tools calls for eliminating or neutralizing them, a discrepancy, or “conflict,” that the regulator itself must resolve or explain to registered businesses.