In a recent video, SEC Chair Gary Gensler addressed the conflicts of interest that AI can create in the financial services sector, especially in the case of robo-advisers and other predictive data analytics (PDA) services that make recommendations to clients.
While AI has grown extremely adept at training itself on user data, a model's appeals to subtle user-specific traits can make its recommendations appear more tailored to a client's material investment interests than they actually are. According to Gensler, this tendency opens the door to a conflict of interest: an AI model optimized for client acquisition and retention may not be acting in the client's actual best interest.
How Regulation Best Interest – a heightened standard of fiduciary duty introduced in 2020 – will be upheld in the emerging AI landscape is of key concern to SEC regulators. Gensler has endorsed a proposed rule issued last year that would prohibit the use of PDA technology that puts firms ahead of clients.
The proposed rule would require financial institutions to adopt internal policies and comply with stringent recordkeeping requirements to ensure they are fully aware of how their PDAs work in practice.
Much of the rule seems designed to curb AI’s tendency to run away from the constraints of its original parameters. As Gensler stated in remarks to the National Press Club: “AI models’ decisions and outcomes often are unexplainable. Part of this is inherent to the models themselves. The math is nonlinear and hyper-dimensional, from thousands to potentially billions of parameters. It’s dynamic with the ability to change and learn from new data and from the model’s use.”
As the use of AI continues to reshape the regulatory landscape, it is likely that efforts will be intensified to keep track of what AI is doing in practice.
Other AI issues
Gensler has also been a vociferous critic of an emerging AI "monoculture," in which many financial institutions depend on a small number of AI models, creating the risk of large-scale, correlated failures.
Gensler has likewise criticized "AI washing," a practice in which financial firms deceptively advertise the role of AI in their investment decisions when in reality there is little or no material AI involvement. This practice has been the focal point of several notable SEC enforcement actions in recent months.