We have all heard of redlining, but what is digital redlining and how can we look for it in our financial institutions?
This article explores digital redlining and provides a few questions we can ask to uncover digital redlining risk in our organizations.
Redlining & Fair Lending
We all know that fair lending is and will always be a hot topic with regulators. And we also know that redlining has been a continually growing focus of regulators over the last decade. In fact, the Department of Justice (DOJ) recently announced that it is spearheading its most aggressive and coordinated enforcement effort to address redlining: its new Combatting Redlining Initiative.
Redlining, of course, happens when financial institutions and lenders illegally avoid providing services to certain communities that have concentrations of minorities and other protected classes. Redlining is prohibited by the Fair Housing Act as well as the Equal Credit Opportunity Act.
Now, traditionally, regulators have focused on redlining as it applies to branch locations, advertising methods, and geographic lending patterns that avoid certain communities of color.
But there is a new element to redlining that the regulators are starting to talk about: digital redlining.
Digital Redlining & The CFPB
A while back, we released a Compliance Clip (video) that discusses digital redlining in quite a bit of detail. In the video, we explain how digital redlining often relates to disparate impact: banks set hard policies or use computer systems that appear neutral on their face, but the result, whether intended or not, is that discrimination occurs. This is a real concern because more and more financial institutions are trying to become efficient through technology, and in doing so, we are sometimes using tools that could get us into trouble.
In a recent speech, CFPB Director Rohit Chopra discussed the concerns regulators have with digital and algorithmic redlining, which can result from automated underwriting systems. Specifically, Mr. Chopra warned of “digital and algorithmic redlining” and made the following comments:
“If we allow racist and discriminatory policies to persist, we will not live up to our country’s ideals. We need a fair housing market that is free from old forms of redlining, as well as new digital and algorithmic redlining….
Technology companies and financial institutions are amassing massive amounts of data and using it to make more and more decisions about our lives, including loan underwriting and advertising.
While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening.
Findings from academic studies and news reporting raise serious questions about algorithmic bias. For example, a statistical analysis of 2 million mortgage applications found that Black families were 80% more likely to be denied by an algorithm when compared to white families with similar financial and credit backgrounds. The response of mortgage companies has been that researchers do not have all the data that feeds into their algorithms or full knowledge of the algorithms. But their defense illuminates the problem: the algorithms are black boxes behind brick walls. When consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias.
Algorithms can help remove bias, but black box underwriting algorithms are not creating a more equal playing field and only exacerbate the biases fed into them.
Given what we have seen in other contexts, the speed with which banks and lenders are turning lending and advertising decisions over to algorithms is concerning. Too many families were victimized by the robo-signing scandals from the last crisis, and we must not allow robo-discrimination to proliferate in a new crisis.
We should never assume that algorithms will be free of bias. If we want to move toward a society where each of us has equal opportunities, we need to investigate whether discriminatory black box models are undermining that goal.
I am pleased that the CFPB will continue to contribute to the all-of-government mission to root out all forms of redlining, including algorithmic redlining.”
The full speech by CFPB Director Chopra can be found here.
Tips to Uncover Digital Redlining
As digital and algorithmic redlining becomes a focus of the regulators, it will be important for each financial institution to determine its risk associated with this new form of discrimination. Specifically, your financial institution can answer the following questions:
Do you utilize any systems (e.g. automated underwriting) that could have the potential for digital redlining?
Do you have a regular fair lending review that works to identify redlining risk, and does that review include looking for digital or algorithmic redlining?
If you use an automated underwriting system, is it a "black box behind a brick wall," or do you know how the system makes decisions?
What is your organization’s risk for digital redlining and are there any steps that can be taken to reduce that risk?
What else should you be considering in relation to digital or algorithmic redlining?
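As one illustration of the kind of check a regular fair lending review might include, the sketch below compares denial rates between two applicant groups and flags a disparity for further review. This is a hypothetical example only: the data, group labels, and the 25% review threshold are all illustrative assumptions, not regulatory standards or a prescribed methodology.

```python
# Illustrative sketch: a simple denial-rate disparity check that a fair
# lending review might run against automated underwriting outcomes.
# All data and thresholds below are hypothetical.

def denial_rate(decisions):
    """Fraction of applications denied; decisions is a list of 'approve'/'deny'."""
    return decisions.count("deny") / len(decisions)

def disparity_ratio(group, comparison):
    """How many times more often one group is denied than the comparison group."""
    return denial_rate(group) / denial_rate(comparison)

# Hypothetical outcomes for two groups of similarly qualified applicants
group_a = ["deny", "approve", "deny", "approve", "deny",
           "approve", "approve", "approve", "deny", "approve"]   # 4 of 10 denied
group_b = ["deny", "approve", "approve", "approve", "approve",
           "approve", "approve", "approve", "deny", "approve"]   # 2 of 10 denied

ratio = disparity_ratio(group_a, group_b)
print(f"Group A denial rate: {denial_rate(group_a):.0%}")   # 40%
print(f"Group B denial rate: {denial_rate(group_b):.0%}")   # 20%
print(f"Disparity ratio: {ratio:.1f}x")                     # 2.0x

# Hypothetical review threshold: escalate if one group is denied
# more than 25% more often than the comparison group.
if ratio > 1.25:
    print("Flag for further fair lending review")
```

A real review would, of course, control for financial and credit characteristics and use statistically sound methods, but even a simple comparison like this can surface patterns worth investigating, which is exactly the kind of visibility a "black box" system denies you.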
Conclusion on Digital Redlining
The reality is that digital redlining is a growing concern of the regulators, and therefore, financial institutions should evaluate their risks and determine whether any action is needed to reduce the risk of digital or algorithmic redlining.