CFPB issues new spotlight on AI chatbots in banking

The CFPB’s Rohit Chopra said “a poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”

On Tuesday, the US Consumer Financial Protection Bureau (CFPB) released a new issue spotlight on the expansive adoption and use of chatbots by financial institutions. The CFPB said it has received numerous complaints from frustrated customers who were trying to get timely, straightforward answers from their financial institutions or to raise a concern or dispute.

“To reduce costs, many financial institutions are integrating artificial intelligence technologies to steer people toward chatbots,” said CFPB Director Rohit Chopra. “A poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”

Chatbot tech

Chatbots are software programs that simulate human-like conversation, and institutions often use them to reduce the cost of employing customer service agents. These chatbots sometimes have human names and use popup features to encourage engagement. Some chatbots use more complex technologies marketed as "artificial intelligence" to generate responses to customers.

Approximately 37% of the US population is estimated to have interacted with a bank's chatbot in 2022, the CFPB reported, mainly to retrieve account balances, look up recent transactions, and pay bills.

Much of the industry uses simple rule-based chatbots with either decision tree logic or databases of keywords or emojis that trigger preset, limited responses or route customers to Frequently Asked Questions, the CFPB said.
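A simple rule-based chatbot of the kind the CFPB describes can be sketched as a keyword table that maps phrases in a customer's message to preset responses, with a fallback that routes everything else to an FAQ page. The keywords, responses, and URL below are hypothetical illustrations, not taken from any institution's actual system:

```python
# Hypothetical keyword-based chatbot: each keyword triggers a preset reply.
RULES = {
    "balance": "Your current balance is shown under Accounts > Balance.",
    "transactions": "Recent transactions are listed under Accounts > Activity.",
    "pay bill": "To pay a bill, go to Payments > Pay a Bill.",
}

# Anything unmatched is routed to the FAQ rather than answered.
FALLBACK = "Sorry, I didn't understand that. Please see our FAQ: https://example.com/faq"

def respond(message: str) -> str:
    """Return the preset response for the first matching keyword, else the fallback."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK

print(respond("What's my balance?"))   # matches the "balance" rule
print(respond("I want to dispute a charge"))  # no rule matches: falls back to FAQ
```

The fallback branch illustrates the limitation the CFPB highlights: a request outside the preset rules, such as a dispute, gets a canned deflection rather than help.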

Other institutions have built their own chatbots by training algorithms with real customer conversations and chat logs, like Capital One’s Eno and Bank of America’s Erica. More recently, the banking industry has begun adopting advanced technologies, such as generative chatbots, to support more intricate customer service needs.

The CFPB warns businesses to avoid using chatbots as their primary customer service delivery channel when it is reasonably clear that the chatbot is unable to meet customer needs.

Risks

The agency’s spotlight found the use of chatbots raised several risks, including:

  • Noncompliance with US consumer financial protection laws. When chatbots ingest customer communications and provide responses, the information they provide may be inaccurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect consumers' privacy and data.
  • Diminished customer service and trust. When consumers require assistance from their financial institution, the circumstances could be dire and urgent. Instead of finding help, consumers can face repetitive loops of unhelpful jargon. Chatbot interactions can diminish their confidence and trust in their financial institutions.
  • Harm to consumers. When chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm. It could lead the consumer to select the wrong product or service, and consumers could be assessed fees or other penalties if they receive inaccurate information about making payments.

The CFPB warned businesses that it is monitoring the market and expects institutions using chatbots to do so in a manner consistent with their customer and legal obligations. It also encouraged customers experiencing problems with the technology to submit a consumer complaint to the CFPB.