At the National Society of Compliance Professionals’ annual conference on Tuesday, three panelists presented on a topic of great concern and much recent debate: generative AI and, more specifically, the implications of such artificial intelligence tools for compliance officers and their employers.
Alex Gavis, special counsel at Eversheds Sutherland, moderated the panel, introducing us to his fellow panelists, Vall Herard, CEO and co-founder of Saifr, and Jasmin Sethi, founder and CEO of Sethi Clarity Advisers.
Getting to know AI
Herard explained from the outset that AI, at bottom, uses a lot of data (and a computer) to make sense of data, often to build an algorithm. Generative AI, he said, is a subset of AI: a technology that learns the patterns and structure of its input training data and then generates new data with similar characteristics.
“You train the model with a lot of data,” he said. The tech learns the sequences of words that normally appear together and can then anticipate which word would come next in a sequence, so “the cat chased a mouse” is more predictable than “the cat chased an elephant.”
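To make that intuition concrete, here is a toy sketch (our illustration, not anything the panelists presented): count which words follow which in a small training text, then rank the candidates by how often they appeared.

```python
# Toy next-word predictor (illustrative only): learn bigram frequencies
# from a tiny corpus, then rank likely next words.
from collections import Counter, defaultdict

corpus = (
    "the cat chased a mouse . the cat chased a ball . "
    "the dog chased a cat . a mouse ran ."
).split()

# Count how often each word follows the previous one in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev, k=3):
    """Return the k most frequent next words after `prev`, with frequencies."""
    total = sum(follows[prev].values())
    return [(word, count / total) for word, count in follows[prev].most_common(k)]

print(predict("chased"))  # "a" followed "chased" every time, so it is the safe guess
print(predict("a"))       # "mouse" leads; "elephant" never appeared, so it is never predicted
```

Real generative models learn far richer patterns over vastly more data, but the principle is the same: predict what usually comes next.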
Sethi, for her part, told the audience that she has a particular appreciation for the power and benefits of AI: she is a financial adviser who happens to be blind. AI developments have been incredibly useful to her, she explained, noting that she already missed her Alexa at home.
She also echoed a point SEC Commissioner Hester Peirce made recently in a written statement (a topic we’ll come back to): AI encompasses any automation, so it could include, for instance, your firm’s automated processes for onboarding new hires. It’s a sweeping term.
AI and the compliance officer
The panelists noted the many benefits AI already brings to compliance departments. It can help businesses predict risk (regulatory risk, liquidity risk, and so on) from the data they possess right now; track customer complaints; develop communications with the public; monitor trading activity; spot elder abuse in client transactions; root out insider trading; and alert you to regulatory changes so you can manage them.
“At all times, businesses need to look at their data and make sure something like Regulation Best Interest is not being violated because of something an AI tool is spitting out.”
Vall Herard, co-founder, Saifr
Each of the panelists emphasized the need for people with compliance knowledge to integrate AI into business operations, and into compliance specifically. Such professionals are better able to appreciate issues of policy and data governance; the need for transparency in how and when AI is being used; environmental and societal well-being; accountability for the data going in (inputs) and its interpretation (outputs); and bias and discrimination. The need for constant human oversight was stressed throughout the panel discussion.
(Each of those seven challenges comes from the EU’s Expert Group on AI, whose framework the White House’s Office of Science and Technology Policy has adopted.)
The compliance team is also best poised to appreciate that what the AI generates must be consistent with regulatory imperatives. “AI tools have hallucinations,” Herard said. “When they start filling gaps in the data with made-up data, they become a big risk for firms that are not vigilant. At all times, businesses need to look at their data and make sure something like Regulation Best Interest is not being violated because of something an AI tool is spitting out,” he said.
He said that bias is always a risk, since the data comes from humans, who can be biased. An AI tool trained on reams of housing data, some of which reflects discriminatory lending, absorbs that discriminatory data into its overall data bank.
“AI can be great for writing job descriptions and for reviewing resumes, but if women don’t often apply for jobs requiring 10 years of experience when they have only five, while many more men do, the inherent discrepancy (or bias) is baked into the data,” Herard said.
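To illustrate how such a discrepancy gets baked in, here is a hypothetical sketch with synthetic data (ours, not an example from the panel): if one group historically applied with more experience, and past hiring tracked experience, a screening model can learn group membership alone as a hiring signal.

```python
# Hypothetical, synthetic illustration of bias baked into historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)  # 0/1, a stand-in for, say, gender

# Historically, group 1 applied with ~10 years' experience, group 0 with ~5.
years = np.where(group == 1, rng.normal(10, 2, n), rng.normal(5, 2, n))

# Past hiring decisions tracked experience only, not group membership.
hired = years + rng.normal(0, 1, n) > 8

# Yet a model screening on group alone predicts hiring far better than chance:
# the application-pattern discrepancy has become a proxy for the outcome.
model = LogisticRegression().fit(group.reshape(-1, 1), hired)
print("accuracy from group alone:", model.score(group.reshape(-1, 1), hired))
```

The point is not that any particular tool does this; it is that historical data alone can carry the bias into the model.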
SEC’s predictive data analytics rule proposal
In July, the SEC proposed rules regarding predictive data analytics with the stated goal of helping investment firms guard against conflicts of interest when they use artificial intelligence, predictive analytics, and other technologies in their dealings with retail investors.
The securities regulator proposed rules that would require broker-dealers and investment advisers to assess whether their use of certain technologies risks prioritizing their interests over their clients’ interests — and to avoid or ameliorate these kinds of conflicts.
“AI can encompass an Excel spreadsheet,” Sethi observed. “And it can be a chatbot. And there is a lot in between those two things.”
Jasmin Sethi, founder and CEO, Sethi Clarity Advisers
Noting SEC Commissioner Peirce’s strong dissent from the rule proposal, the panelists said that the agency failed to distinguish among AI’s many incarnations.
“AI can encompass an Excel spreadsheet,” Sethi observed. “And it can be a chatbot. And there is a lot in between those two things.”
The SEC received many comment letters in response to the proposal that touched on the rule’s unintended consequences, Herard noted, thanks to sweeping provisions that treated the vast array of predictive analytics tools as equally risky. “Pretty much any Microsoft product these days will have AI embedded in it,” he said.
Managing risk
So could compliance departments using AI tools over-rely on them? That is a risk compliance officers must consider.
“A self-driving car sees a ball skid across the road,” Herard said. “We humans know enough to also look out for the kid, or maybe the pet, that could be following the ball. The car, on the other hand, does not know that. People add much-needed context to the analysis of data, and compliance teams must appreciate that the assumptions behind any results might be missing something.”
Compliance teams should be prepared to be skeptical and do a lot of quality checking, using their human judgment to make sure the data is ready for use.
Sethi pointed out that AI is amazing for making us more productive, since it does not take a day off or get sick. It can pick up the load of tedious tasks that compliance (and other corporate) departments labor under. And its potential for helping people with disabilities and aiding businesses in meeting Americans with Disabilities Act compliance is tremendous.
It could also point out areas where bias is occurring, by locating patterns in, let’s say, the types of startups where venture capital funding typically ends up.
But human judgment in its use is irreplaceable.
“Trust but verify,” Herard said, quoting President Ronald Reagan. “Meaning, test your data. Even being wrong 10% of the time is way too often,” he said.