Four US agencies issue joint statement on countering AI bias

Pledge to ‘root out discrimination caused by any tool or system that enables unlawful decision making’.

On Tuesday, four US agencies issued a joint pledge to use their existing laws and regulations to monitor artificial intelligence (AI) and to uphold the core principles of fairness, equality and justice in the rollout and use of AI products and services.

The Civil Rights Division of the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the US Equal Employment Opportunity Commission (EEOC) outlined their commitment, noting how pervasive automated systems have become in daily life and how they affect civil rights, fair competition, consumer protection, and equal opportunity.

“Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making,” said CFPB Director Rohit Chopra in the press release announcing the joint statement.

“Claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books.”

Lina M Khan, Chair, FTC

“Technological advances can deliver critical innovation – but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” said FTC Chair Lina M Khan.

“As social media platforms, banks, landlords, employers, and other businesses that choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result,” said Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division.

Prior guidance

Over the past three years, these agencies have issued a number of guidance documents, interpretive rules, and enforcement actions to drive home the message of the joint statement.

The FTC has explained how its prohibition on deceptive or unfair conduct can apply if a person or entity makes, sells or uses a tool that is effectively designed to deceive, even if deception is not its intended or sole purpose. Specifically, it warns businesses against creating chatbots that can be misused for fraud or other harm, advising companies to closely monitor such risks, to build deterrence features into the technology itself, and not to rely on consumers to discover that their generative AI tools can be, or are being, used fraudulently.

Earlier this month, the CFPB issued a policy statement explaining how it prohibits the abusive use of AI technology to obscure important features of a product or service or to leverage gaps in consumer understanding.

And the agency has created standards to guard against digital redlining, seeking to prevent algorithmic bias in automated tools from skewing home valuations and appraisals.

Project Gretzky

In September 2022, Jonathan Kanter, Assistant Attorney General for the DOJ’s Antitrust Division, said the DOJ was bringing in more expertise to better understand digital platforms, and that his agency calls its AI effort “Project Gretzky” – a nod to the hockey legend known for a line about “skating to where the puck is going”.

In 2021, EEOC Chair Charlotte A Burrows launched an agency-wide initiative to ensure that AI, machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces. Through the initiative, the EEOC says it will guide employers, employees, job applicants, and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.

Tech tools

The joint statement arrives just as ChatGPT and a growing list of competing chatbots – along with digital assistants, e-payment tools, facial detection apps, recommendation algorithms and more – are being embraced by businesses seeking faster, easier, and more portable tools to sift through and manage reams of data.

The agency heads quoted in the statement acknowledge that these tools can be highly effective at processing vast amounts of information, and that they are often accurate and used without bias or fraud.

But the statement is also a warning to those leveraging the technology: the growth and application of AI will not escape regulatory scrutiny in any industry or application, and enforcement authorities expect businesses to maintain rigorous monitoring and supervisory processes around the deployment of these tools and the data they generate.