Gensler spoke at Yale Law School about artificial intelligence, noting its potential benefits, including greater financial inclusion and an enhanced user experience. He also expressed deep concern about the system-wide risks the technology presents, the perils of AI-washing, and the conflicts of interest it may introduce.
Gensler said that we have seen in our economy how one or a small number of tech platforms can come to dominate a field: there is one leading search engine, one leading retail platform, and three leading cloud providers. “I think due to the economies of scale and network effects at play we’re bound to see the same develop with AI,” he said.
Such a development would promote both herding and network interconnectedness, and such monocultures are classic sources of systemic risk, Gensler said. “Current model risk management guidance – generally written prior to this new wave of data analytics – will need to be updated,” he noted.
Fraud
Gensler worries about the potential for fraudulent behavior such as front-running (trading ahead of a customer’s orders) or spoofing (placing fake orders). Preventing such conduct depends not only on the algorithms underlying AI tools but also on the humans who deploy those models and who must put appropriate guardrails in place when using them.
Those guardrails must take into account current market conditions and applicable laws and regulations, and they must be tested and monitored on a recurring basis. Rather than disclosing AI risks in “boilerplate” language, Gensler said, firms should craft specific disclosures that speak to those risks.
Conflicts
Today’s AI-based models provide an increasing ability to make predictions about each of us as individuals; AI recommender systems already consider how we, as individuals, might respond to their prompts, products, and pricing, Gensler noted.
Depending on how it is programmed, such a tool could recommend certain products to an investor because they benefit the firm’s revenues, profits, or other interests while providing less benefit to the individual.
“That’s why the SEC proposed a rule last year regarding how best to address such potential conflicts across the range of investor interactions,” he said. The proposal targeted predictive data analytics tools and would require securities firms using such technologies to adopt policies and procedures designed to address, eliminate, or neutralize potential conflicts, along with recordkeeping processes to track the tools’ deployment and effectiveness.
AI-washing
Investment advisers and broker-dealers should not mislead the public by claiming to use an AI model when they are not, nor should they claim to use an AI model in a particular way without actually doing so, Gensler said. “Such AI-washing, whether it’s by companies raising money or financial intermediaries, such as investment advisers and broker-dealers, may violate the securities laws,” he added.
“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims,” Gensler told the audience. “If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.”
In a December 2023 panel discussion in which he discussed the SEC’s proposed predictive analytics rules, Gensler warned firms not to misrepresent their AI capabilities. “One shouldn’t greenwash, and one shouldn’t AI-wash,” he said. “If you’re raising money from the public, if you’re offering and selling securities, you come under the securities laws and give full, fair and truthful disclosure, and then investors can decide.”
The Federal Trade Commission, Department of Justice, Equal Employment Opportunity Commission, and Consumer Financial Protection Bureau (CFPB) issued a joint statement in April 2023 warning advertisers “not to overpromise what your algorithm or AI-based tool can deliver” lest they violate consumer protection law. The CFPB on its own has warned financial institutions that they “run the risk” of noncompliance with existing federal consumer financial laws when “chatbots ingest customer communications and provide responses … [that] may not be accurate.”
Although there is much uncertainty surrounding the future course of AI legislation in the United States, one best practice in the current regulatory environment is clear: firms should avoid making misleading claims about their use of AI, much as they must avoid making false claims about environmentally friendly products and services.