Shared responsibility key to developing the trust we need in AI, says Michael Hsu

Speech to Financial Stability Oversight Council tackles the tricky question of building trust in the coming wave of AI.

“With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory.” So said Michael Hsu, Acting Comptroller of the Currency, in a thoughtful speech delivered recently to the Financial Stability Oversight Council’s AI Conference. His remarks were a welcome change of perspective in a conversation that so often tends to focus on racing – whether that’s racing to develop models or racing to adopt and deploy them.

Hsu sees trust as the key factor to consider, and he told the conference: “Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally.” It’s a neat and concise observation that gets to the heart of an existential tension of modern times. While a recent McKinsey survey of 1,363 managers across the world put global adoption of generative AI at 72% of businesses and organizations, a Salesforce survey of 6,000 workers in the global knowledge economy found 54% of AI users saying they don’t trust the data used to train AI systems, with three in four of that number believing AI lacks the information it needs to be useful.

In short, we know we have to get to grips with AI, but we don’t trust it.

Overly rapid adoption of AI

Sean Knapp, CEO of data engineering platform Ascend, told Forbes: “AI is only as good as the data that’s backing it,” adding that businesses need to understand that “just because AI will give them an answer, that doesn’t mean it will be accurate.” In the same article, Jonathan Bruce, a vice president at data intelligence firm Alation, said: “You need to slow down to go fast.” And that’s a theme Hsu developed in his speech.

“AI holds the promise of doing things better, faster, and more efficiently, yielding benefits for individuals, managers, organizations, and the public,” he said. But “if the past is any guide, the micro- and macro-prudential risks from such uses will emanate from overly rapid adoption with insufficiently developed controls. What starts off as responsible innovation can quickly snowball into a hyper-competitive race to grow revenues and market share, with a ‘we’ll deal with it later’ attitude toward risk management and controls.”

His solution? “Identify in advance the points at which pauses in growth and development are needed to ensure responsible innovation and build trust.” He says it is useful to consider how electronic trading developed, identifying three key phases of evolution that AI also appears to be following: “It is used at first to produce inputs to human decision-making, then as a co-pilot to enhance human actions, and finally as an agent executing decisions on its own on behalf of humans.”

“Banks should ensure that proper controls are in place and accountability is clearly established.”

Michael Hsu, Acting Comptroller of the Currency

So, he said: “For banks interested in adopting AI, establishing clear and effective gates between each phase could help ensure that innovations are helpful and not dangerous. Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established.”

These remarks addressed the challenge of AI’s use as a tool, but Hsu made it clear that, in his view, businesses also need to prepare for its use as a weapon – for example, in facilitating fraud and operational disruption. He said that “an increase in AI-powered fraud could sow the seeds of distrust more broadly in payments and banking.”

Accountability is key to Hsu’s vision of how the issue of trust is to be tackled, and he uses an interesting recent case from the Canadian legal system as a point of reference – a case that Matt Kelly wrote about at some length on Radical Compliance. In summary, a customer of Air Canada asked the company’s chatbot about the bereavement fare for a flight to Toronto. The customer was advised to book the flight and apply for the fare retrospectively. But Air Canada’s policy prohibits retroactive refunds for bereavement fares – the chatbot had given the wrong advice.

Separate legal entity

The customer sued after being refused a refund. Air Canada said it could not be held responsible for the information provided by the chatbot, which it argued was a “separate legal entity that is responsible for its own actions.” British Columbia’s Civil Resolution Tribunal disagreed with that argument and ruled in the customer’s favour.

That case, Moffatt v Air Canada, established that companies are liable for information provided by a chatbot, a position that seems eminently sensible. However, said Hsu, the issue becomes more complex when considered from a management perspective. If a company web page is faulty, or an employee makes a mistake, it is relatively straightforward to identify the reasons for the fault and to put measures in place to prevent a recurrence. But, he says: “With a black box chatbot that is powered by third parties, most companies are likely to struggle to identify whom to hold accountable for what or how to fix it.”

To find an answer, he looks to the shared responsibility model common in the cloud computing space. “In the cloud computing context, the ‘shared responsibility model’ allocates operations, maintenance, and security responsibilities to customers and cloud service providers depending on the service a customer selects,” he says. “A similar framework could be developed for AI.”

To make this model work, he sees agencies such as the OCC and the FSOC as having a key role in “facilitating the discussions and engagement needed to build trust.” That is vital because, as Kelly says on Radical Compliance, “if we can’t hold parties accountable for AI gone wrong, nobody will trust it – and then, why are we bothering with any of this stuff at all?”