CFPB urged by consumer groups to take action on banks’ use of algorithms
Two of the country’s top consumer advocacy organizations are urging the Consumer Financial Protection Bureau to issue guidance on how banks use algorithms to inform credit decisions, a practice that, without proper guardrails, they say could erect barriers to access for underserved populations.
In a letter sent last week, Consumer Reports and the Consumer Federation of America asked CFPB Director Rohit Chopra for “regulatory clarity” on how financial institutions search for and then implement less discriminatory algorithms in their underwriting work and pricing choices.
The letter’s authors — Jennifer Chien, senior policy counsel for financial fairness at Consumer Reports, and Adam Rust, director of financial services at the Consumer Federation of America — wrote that their request to the CFPB was prompted by a surge in banks’ adoption in recent years of machine learning models and other algorithmic decision-making tools.
“While these advancements have the potential to enhance efficiency and advance financial inclusion, there is growing evidence that they can also perpetuate and exacerbate existing and historical biases, leading to discriminatory outcomes that adversely affect marginalized and underserved communities,” the letter stated.
The best course of action going forward, Chien and Rust wrote, would be for the CFPB to “provide clear guidance on how lenders should search for and implement less discriminatory alternatives when using algorithms for credit underwriting and pricing,” adding that such guidance is “needed to complement supervisory and enforcement action” from the agency.
The CFPB confirmed to FedScoop that it received the letter but declined to comment further. The Bank Policy Institute, an influential trade group representing dozens of banking giants and regional lenders, did not respond to a request for comment by the time of publication.
Chopra has spoken frequently about risks to consumers posed by AI, including the technology’s potential for concentrating economic power and the need for regulatory guardrails around credit decisions. Last September, the CFPB released guidance on lenders’ use of AI in issuing credit denials and followed that last month with the approval of a rule on how AI can be used to determine home appraisals.
“I am really worried about these foundational AI models,” Chopra said in an exchange with Sen. Mark Warner, D-Va., during a Senate Banking Committee hearing last month. “I think there will probably be just a handful of them, for which most of the industry is built on top of that. And when there are problems with one of those foundational models, we could really see issues that occur throughout sectors of the economy, including the financial system.”
Despite Chopra’s many public warnings about the technology and the agency’s flurry of AI-related activity, the consumer groups said in their letter that those policy measures are “lacking” when it comes to guidance on testing for and guarding against discrimination in machine learning models for lending.
Chien and Rust lauded the CFPB’s “important and concrete” work on a swath of AI topics while noting that the agency “has yet to clearly and concretely address critical issues such as what obligations lenders have to address the risk of discrimination and disparate impact, how they can identify and measure disparate impact, when they should search for alternative models, and how they can establish and maintain robust compliance management systems with these goals in mind.”
The authors noted in their letter that some financial institutions may be building accurate and fair AI and machine learning models and using proper methods to search for less discriminatory algorithms (LDAs), but it’s also probable that others won’t pursue testing and LDA searches with the same level of rigor. The CFPB, Chien and Rust wrote, “should provide financial institutions with guidance on how to search for and implement LDAs.”
Rayid Ghani, a professor of AI at Carnegie Mellon University, said the type of guidance sought by Consumer Reports and the Consumer Federation of America is important given that the banking industry’s use of algorithms and searches for LDAs remains “not a well-understood space.” Establishing an “industry standard” that “helps with enforcement” is a reasonable request for the CFPB, he added.
But any guidance or regulatory clarity should take into account the fact that an algorithm isn’t “necessarily making autonomous decisions,” said Ghani, who also served as chief scientist on President Barack Obama’s 2012 campaign. A loan officer, he said, is likely to play a role in either agreeing with or overriding an AI model’s recommendation on credit.
“If you know the deficiencies of your AI model, you need to have a business process in place that can mitigate that, as opposed to totally rely on that, regardless of its performance,” Ghani said.
Absent guidance for lenders of all sizes, the consumer groups said they’re concerned that industry “disruptors” will push forward with “dangerous” AI and ML models, while small and medium-sized banks fall behind and miss out on the technological opportunities to ensure access to affordable credit for underserved consumers.
“While these models offer important potential benefits, they can reinforce and make worse existing and historical biases that prevent communities of color and low-income consumers from accessing affordable credit,” Chien said in a statement. “As AI decision-making advances rapidly, the CFPB should provide clear guidance to ensure lenders treat consumers fairly and protect them from algorithmic discrimination.”