Introduction
When Rachelle Faroul, a thirty-three-year-old Black woman, applied for a loan from Philadelphia Mortgage Advisers in April 2016, she didn’t foresee any problems.
She was a Northwestern University graduate, had good credit and a decent amount of savings, and at the time was making approximately $60,000 a year as a computer programming instructor at Rutgers University.
But Philadelphia Mortgage Advisers denied her initial loan application on the ground that her contract income was inconsistent.
Rachelle persisted: she took a full-time job at the University of Pennsylvania and applied for a loan again, this time with Santander Bank.
Santander Bank also denied her application.
By that point, Rachelle had been trying to get a mortgage for over a year, and the several hard inquiries from the lenders she had approached had lowered her credit score.
It wasn’t until Rachelle’s partner, who is half-white and half-Japanese and was then working part-time at a grocery store, agreed to sign on to the loan application that Rachelle was finally approved for the loan.
While Rachelle spent over a year attempting to get her loan approved, Jonathan Jacobs—another loan applicant—was approved for his loan from TD Bank soon after filling out the paperwork, which took him all of about fifteen minutes.
He had almost no savings, a modest income, and a less-than-stellar credit report.
But Jonathan is white.
Such stark racial disparities are not unique to Rachelle’s and Jonathan’s stories or to Philadelphia.
In 2018, Reveal from the Center for Investigative Reporting conducted a study of thirty-one million mortgage records covering nearly every time an American sought a conventional mortgage loan in 2015 and 2016.
The analysis revealed that, even after controlling for several economic and social factors, Black applicants were almost three times more likely than white applicants to be denied a conventional home purchase loan.
Reveal also reported that lenders acknowledged the disparate impact of industry lending practices on people of color but claimed that the racial disparity could be explained by factors the industry keeps hidden from the public, such as credit scores.
Housing discrimination, however, is not a recent problem; it has a long, sordid history in the United States. That ongoing discrimination throughout the twentieth century ultimately prompted Congress to act by passing the Fair Housing Act of 1968 (FHA), which prohibits discrimination in the sale, rental, and financing of housing on the basis of race, color, religion, sex, national origin, disability status, and familial status.
But outlawing only overtly discriminatory practices would not adequately address the country’s long history of housing discrimination. Consequently, courts and government agencies began applying the disparate impact framework first developed in the employment context in Griggs v. Duke Power Co. to housing discrimination claims.
While all eleven federal circuit courts to consider the question recognized disparate impact claims under the FHA,
it was not until 2015 that the Supreme Court of the United States formally held in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc. that disparate impact claims are cognizable under the FHA.
The holding was largely perceived as a win for fair housing activists because it acknowledged liability for unintentional or covert discrimination under the FHA.
Recent technological advancements, however, have raised questions about the FHA’s reach. Once considered a distant prospect, artificial intelligence is now ubiquitous. Among other things, artificial intelligence is used today to estimate a defendant’s likelihood of committing a future crime,
predict what content you want to see on Netflix and on your Facebook Feed,
perform robot-assisted surgery,
power the spam filter in your inbox,
deposit checks through your bank’s smartphone app,
and even place students in schools.
The pervasiveness of artificial intelligence is also reshaping the housing market. Fifty years ago, when the FHA was passed, Congress could not have imagined how technological advances would affect the housing market and the ability of certain groups of people to access it. Today, artificial intelligence and big data play an important role in housing access, as landlords and lenders increasingly rely on predictive analytics to evaluate applicants.
More specifically, the lending industry has increasingly relied on big data and algorithmic decisionmaking to evaluate the creditworthiness of consumers.
While lenders’ use of such predictive techniques may mitigate credit risk in consumer lending, it is not without perils. The accuracy of an algorithmic model is only as good as the data used to train it, and training data that encode a programmer’s implicit biases, or the discriminatory patterns of past lending decisions, can produce a discriminatory algorithm that results in unfair lending practices.
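To make this mechanism concrete, consider a deliberately simplified sketch. Everything in it, including the synthetic data, the zip-code proxy for race, and the credit-score cutoffs, is an assumption for illustration; it depicts no actual lender’s model. The sketch shows how a model trained to reproduce historically biased approvals can deny Black applicants at a far higher rate even though race is never an input:

```python
# Hypothetical illustration only: all data, the zip-code proxy, and the
# credit cutoffs below are synthetic assumptions, not any lender's model.
import random

random.seed(0)

def make_applicant():
    """Race never enters the model, but residential segregation means zip
    code correlates strongly with race; credit scores are drawn from the
    same distribution for everyone."""
    race = random.choice(["Black", "white"])
    majority_black_zip = random.random() < (0.9 if race == "Black" else 0.1)
    credit_score = random.gauss(680, 50)
    return race, majority_black_zip, credit_score

def historical_approval(majority_black_zip, credit_score):
    """The embedded bias: past loan officers demanded a higher credit score
    from applicants living in majority-Black zip codes."""
    return credit_score >= (700 if majority_black_zip else 660)

applicants = [make_applicant() for _ in range(100_000)]

# A naive "model" trained on the historical decisions: for each zip group,
# learn the lowest credit score that was ever approved in the past.
def learned_cutoff(zip_flag):
    return min(score for _, z, score in applicants
               if z == zip_flag and historical_approval(z, score))

cutoffs = {flag: learned_cutoff(flag) for flag in (True, False)}

# Apply the learned model and tally denial rates by race.
denied = {"Black": 0, "white": 0}
total = {"Black": 0, "white": 0}
for race, zip_flag, score in applicants:
    total[race] += 1
    if score < cutoffs[zip_flag]:
        denied[race] += 1

for race in ("Black", "white"):
    print(f"{race} denial rate: {denied[race] / total[race]:.1%}")
# The model never sees race, yet it denies Black applicants far more often,
# because the training data taught it to penalize majority-Black zip codes.
```

Because the learned cutoffs differ only by zip code, the model is facially neutral, yet the denial rates it produces diverge sharply by race. That is disparate impact in miniature, and it is precisely the kind of discrimination the doctrine discussed below is meant to reach.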
Federal housing law, however, has failed to keep pace with technology. The FHA makes no mention of technology generally or artificial intelligence specifically, and it does not address fair lending violations accomplished through predictive analytics, despite the widespread use of proprietary and third-party algorithmic models in many credit-scoring systems. A recently proposed rule (Proposed Rule) from the U.S. Department of Housing and Urban Development (HUD) has exposed this gap in the law.
HUD maintains that the Proposed Rule, the first federal regulation to directly address disparate impact and algorithms, is aimed at aligning HUD’s regulations with the Court’s interpretation of the FHA in Inclusive Communities.
In practice, however, the Proposed Rule would allow lenders to circumvent liability for algorithmic discrimination, in violation of the FHA and fair lending laws, by substantially raising the burden of proof for parties claiming discrimination and by creating seemingly insurmountable defenses for lenders accused of algorithmic disparate impact discrimination.
This Note focuses on the gap in statutory accountability within the FHA for disparate impact discrimination arising from algorithmic decisionmaking in the lending industry. Part I provides a historical overview of disparate impact in credit scoring, including lenders’ present use of nontraditional data and artificial intelligence, highlighting the importance of the disparate impact doctrine as a tool to combat housing discrimination. Part II offers a legal overview of the FHA before and after Inclusive Communities and grounds the need for disparate impact theory as a recourse for algorithm-based discrimination within the broader context of disparate impact litigation. This Part then assesses how HUD’s Proposed Rule contravenes that history and frustrates the purpose of the FHA. Part III offers suggestions for closing this gap in the FHA and addresses potential counterarguments to the proposed solutions.