
Director Chopra’s Prepared Remarks on the Interagency Enforcement Policy Statement on “Artificial Intelligence”

In recent years, we have seen a rapid acceleration of automated decision-making across our daily lives. Throughout the digital world and across sectors of the economy, so-called “artificial intelligence” is automating activities in ways that were previously unimaginable.

Generative AI, which can produce voices, images, and videos designed to simulate real-life human interactions, is raising the question of whether we are ready to deal with a wide range of potential harms, from consumer fraud to privacy to fair competition.

Today, several federal agencies are coming together to make one clear point: there is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.

The Interagency Statement we are releasing today takes an important step toward affirming existing law and reining in unlawful discriminatory practices perpetrated by those who deploy these technologies.1

The statement highlights the all-of-government approach to enforcing existing laws and working collaboratively on “AI” risks.

Threats Posed by So-Called “Artificial Intelligence”

Unchecked “AI” poses threats to fairness and to our civil rights in ways that are already being felt.

Technology companies and financial institutions are amassing vast amounts of data and using it to make more and more decisions about our lives, including whether we get a loan and which advertisements we see.

While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80 percent more likely to be denied by an algorithm than white families with similar financial and credit backgrounds. Mortgage companies have responded that researchers lack all the data that feeds into their algorithms, as well as full knowledge of the algorithms themselves. But their defense illuminates the problem: artificial intelligence often feels like a black box behind a brick wall.2

When consumers and regulators do not know how artificial intelligence makes decisions, consumers cannot participate in a fair and competitive market free from bias.

CFPB Actions to Protect Consumers

That’s why the CFPB and other agencies are prioritizing and confronting digital redlining: redlining caused by bias in lending or home valuation algorithms and other technology marketed as artificial intelligence. These practices are disguised behind so-called neutral algorithms, but those algorithms are built like any other AI system: by scraping data that may reinforce the biases that have long existed.

We are working hard to reduce bias and discrimination when it comes to home valuations, including algorithmic appraisals. We will be proposing rules to make sure artificial intelligence and automated valuation models include basic safeguards against discrimination.

We are also scrutinizing algorithmic advertising, which, once again, is often marketed as “AI” advertising. We published guidance affirming that lenders and other financial providers must take responsibility for certain advertising practices. Specifically, depending on how they are designed and implemented, advertising and marketing practices that use sophisticated analytic techniques could subject firms to legal liability.

We’ve also taken action to protect the public from black box credit models, some so complex that the financial firms that rely on them cannot even explain the results. Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations.

Developing methods to improve home valuation, lending, and marketing is not inherently bad. But when done irresponsibly, such as by building black box models or failing to carefully study the data inputs for bias, these products and services pose real threats to consumers’ civil rights. They also threaten law-abiding nascent firms and entrepreneurs trying to compete with those who violate the law.

I am pleased that the CFPB will continue to contribute to the all-of-government mission to ensure that the collective laws we enforce are followed, regardless of the technology used.

Thank you.

Footnotes

  1. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems
  2. Remarks of Director Rohit Chopra at a Joint DOJ, CFPB, and OCC Press Conference on the Trustmark National Bank Enforcement Action, Consumer Financial Protection Bureau (consumerfinance.gov)