“AI is the most transformative tech of our time,” SEC chair Gary Gensler said this week in the first post of a thread on X, the social media platform formerly known as Twitter. The thread gives the latest indication of the regulator’s likely approach to the technology.
Gensler says: “It is important to focus on the challenges of AI. Given that we’re dealing with automation of human intelligence, the gravity of these challenges is real.”
One problem he foresees is that “If the optimization function in the AI system is taking the interest of the platform into consideration as well as the interest of the customer, this can lead to conflicts of interest.”
He emphasises the SEC’s approach is “technology neutral”, and concludes: “Securities laws, though, may be implicated depending upon how AI tech is used. Within our current authorities, we’re focused on protecting against both the micro & macro challenges of AI”.
Gensler’s comments have, perhaps predictably, been interpreted as ‘taking aim at AI’ by some in the crypto community. But Gensler has been examining the implications of the rise in AI for some time, and more on his thinking can be found in a recent New York Times interview.
Financial stability
His comments draw on conclusions from Deep Learning and Financial Stability, a paper he wrote in 2020. He said “the technology will be the center of future crises” because history suggests that a small number of platforms will build the foundations underpinning AI tools. That deepening of interconnections would make a financial crash more likely: everyone would be relying on the same information, and so would respond to it in similar ways.
Gensler says we have to consider whether AI systems that study investor behaviour will be used to prioritise user interests or not. “You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” he said. That’s why the SEC has proposed a new rule requiring firms using AI to eliminate conflicts of interest.
The question of who would be responsible for faulty financial advice generated by AI is easily solved, Gensler thinks. He points to the fiduciary duty advisers have under law: whether you’re using an algorithm or not, “you have that same duty of care”.