
Upgrade turns to technology to keep AI-based lending fair


Supervisors have repeatedly warned banks and fintech companies that their AI models must be transparent, explainable, fair and unbiased, especially when making lending decisions, and banks and fintech companies are taking extra steps to prove that their algorithms meet all of these requirements.

A prime example is Upgrade, a San Francisco-based challenger bank that offers mobile banking, personal loans, hybrid debit and credit cards, credit builder cards, auto loans and home improvement loans to 5 million consumers. Upgrade is working with an “embedded fairness” provider called FairPlay to backtest and monitor its models in real time and ensure the decisions those models support are free of bias. FairPlay has partnered with 25 banks and fintech companies, including Varo Bank, Figure and Octane.

“What [the partnership with FairPlay] achieves is ensuring that we are fair, compliant and make appropriate credit decisions that do not have a disparate impact on any protected class,” Renaud Laplanche, founder and CEO of Upgrade, said in an interview. Over time, Upgrade plans to apply FairPlay to all of its credit products.

The banking, fintech and banking-as-a-service ecosystem has come under heavy regulatory scrutiny recently, and fair-lending enforcement is already under way. Regulators are concerned that banks and fintech companies are using alternative credit data and advanced artificial intelligence models in ways that are difficult to understand and interpret, and that may introduce bias. In consent orders, regulators have required banks to monitor the fairness of their lending models.

These concerns are not new. Financial companies have been using artificial intelligence in their lending models for years, and regulators have made clear from the beginning that they must comply with all applicable laws, including the Equal Credit Opportunity Act and the Fair Housing Act, which prohibit discrimination on the basis of characteristics such as race.

But proving that AI-based lending models don’t discriminate is a newer field.

“There is an emerging consensus that if you want to use artificial intelligence and big data, you have to take seriously the biases inherent in these systems, investigate those biases rigorously and, if you identify a problem, work seriously and purposefully to solve it,” Kareem Saleh, founder and CEO of FairPlay, said in an interview.

Saleh said Upgrade has shown real leadership, both for itself and for the industry as a whole, in strengthening compliance technology in this area.

Upgrade uses a machine learning technique called gradient boosting to make loan decisions. (Behind the scenes, the company’s personal loans and auto refinance loans are made by partners Cross River Bank and Blue Ridge Bank. Home improvement loans and personal credit lines are also made by Cross River Bank, which issues the Upgrade Card.) About 250 institutions purchase Upgrade’s loans.
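For readers unfamiliar with the technique, the sketch below shows what a gradient-boosted underwriting model can look like in scikit-learn. It is purely illustrative: the feature names, data file and 10% approval cutoff are assumptions, not Upgrade’s actual pipeline.

```python
# A minimal sketch, not Upgrade's actual pipeline: a gradient-boosted
# underwriting model in scikit-learn. Feature names, the data file and
# the 10% approval cutoff are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

FEATURES = ["credit_score", "annual_income", "debt_to_income", "months_employed"]

apps = pd.read_csv("applications.csv")                 # hypothetical historical data
X_train, X_test, y_train, y_test = train_test_split(
    apps[FEATURES], apps["defaulted"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)

# A new application is scored for probability of default, and a policy
# threshold turns that score into an approve/decline decision.
default_prob = model.predict_proba(X_test.iloc[[0]])[0, 1]
print("approve" if default_prob < 0.10 else "decline")
```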

Banks that purchase loans from Upgrade and other fintech companies look for evidence of compliance with the Equal Credit Opportunity Act and other laws governing lending. Beyond that, Upgrade has its own compliance requirements, as do its banking partners and the banks that purchase its loans. FairPlay’s API will keep tabs on all of this, backtesting and monitoring Upgrade’s models for any signs of possible adverse impact on any group.

One aspect of the software that interests Laplanche is its real-time monitoring capability.

“That’s where it becomes more efficient and easier to use, rather than periodically auditing, sending the data to a third party and then getting the results back weeks or months later,” Laplanche said. “Here, you’re able to have this kind of continuous service that’s always running, gets signals very quickly and helps us make adjustments very quickly. We like that it’s embedded and not a batch process.”

FairPlay’s software is most commonly used to backtest loan models. It will run a model on loan applications from two years ago to see how the model would have performed had it been in production at that time.

“Then you can make some reasonable estimates of the model’s outcomes for different groups,” Saleh said.
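The core of such a backtest can be captured in a few lines. The sketch below replays a model’s historical scores and compares approval rates across groups using the adverse impact ratio; the column names, score cutoff and four-fifths threshold are illustrative assumptions, not FairPlay’s methodology.

```python
# A sketch of a fairness backtest: compare approval rates across groups on
# applications from two years ago. Column names, the score cutoff and the
# four-fifths threshold are illustrative assumptions, not FairPlay's method.
import pandas as pd

def adverse_impact_ratio(apps: pd.DataFrame, group_col: str, reference: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's rate."""
    rates = apps.groupby(group_col)["approved"].mean()
    return rates / rates[reference]

# Assume the replayed model scores for the old applications are already in
# a "model_score" column (probability of default).
old_apps = pd.read_csv("applications_2022.csv")
old_apps["approved"] = old_apps["model_score"] < 0.10   # illustrative cutoff

air = adverse_impact_ratio(old_apps, group_col="sex", reference="male")
print(air)   # ratios well below 0.8 are a common red flag (four-fifths rule)
```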

If backtesting reveals problems, such as more loans being made to white men than to women and minorities, the software can be used to determine which variables led to different outcomes for different groups.

Once those variables are identified, the question becomes, “Do I need to rely on these variables as much as I do now?” Saleh said. “Are there other variables that might be similarly predictive but drive less of a differential effect? You can only ask all of these questions once you take the first step of testing the model and asking what the outcomes are for all of these groups.”
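One generic way to ask those questions is a drop-one comparison: retrain the model without each variable and see how much predictive power and the gap in approval rates between groups change. The sketch below uses the same hypothetical dataset as above and is not FairPlay’s algorithm.

```python
# A generic drop-one comparison, not FairPlay's algorithm: retrain the model
# without each variable and record how predictive power (AUC) and the gap in
# approval rates between groups change. Dataset and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["credit_score", "annual_income", "debt_to_income", "months_employed"]

def approval_gap(scores, groups, cutoff=0.10):
    """Largest minus smallest approval rate across groups at a given cutoff."""
    approved = pd.Series(scores < cutoff, dtype=float)
    rates = approved.groupby(groups.to_numpy()).mean()
    return rates.max() - rates.min()

apps = pd.read_csv("applications.csv")
y = apps["defaulted"]

for dropped in FEATURES:
    kept = [f for f in FEATURES if f != dropped]
    m = GradientBoostingClassifier().fit(apps[kept], y)   # in practice, use held-out data
    scores = m.predict_proba(apps[kept])[:, 1]
    print(f"without {dropped}: "
          f"AUC={roc_auc_score(y, scores):.3f}, "
          f"approval gap={approval_gap(scores, apps['sex']):.3f}")
```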

For example, a woman who leaves the workforce for several years to raise children may have an inconsistent income history, which can look like a big red flag to a loan underwriting model. But information about women’s credit performance can be used to adjust the weights of variables, Saleh said, making the model more sensitive to women as a class.

A Black man who grew up in a neighborhood without bank branches and therefore mostly used check cashers is less likely to have a high FICO score and may not have a bank account. In that case, a model might be tweaked to reduce the influence of credit scores and give more weight to consistent employment, Saleh said.

Saleh said such adjustments can “allow the model to capture people to whom it was previously insensitive because of overreliance on certain information.”
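The article does not say what mechanism sits behind these adjustments. One widely published technique in the same spirit is Kamiran and Calders’ reweighing, which assigns training examples sample weights that offset the correlation between group membership and outcome. The sketch below shows that substitute technique on the same hypothetical dataset; it is not FairPlay’s or Upgrade’s method.

```python
# A rough sketch of Kamiran & Calders' "reweighing": compute sample weights
# that make group membership and the outcome look independent in training,
# then pass them to the learner. A named substitute illustration, not
# FairPlay's or Upgrade's method; columns and data are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["credit_score", "annual_income", "debt_to_income", "months_employed"]

apps = pd.read_csv("applications.csv")
y = apps["defaulted"]

# weight(group g, outcome o) = P(g) * P(o) / P(g, o)
p_group = apps["sex"].value_counts(normalize=True)
p_outcome = y.value_counts(normalize=True)
p_joint = apps.groupby(["sex", "defaulted"]).size() / len(apps)

weights = apps.apply(
    lambda row: p_group[row["sex"]] * p_outcome[row["defaulted"]]
    / p_joint[(row["sex"], row["defaulted"])],
    axis=1,
)

model = GradientBoostingClassifier()
model.fit(apps[FEATURES], y, sample_weight=weights)
```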

Saleh said FairPlay’s backtests can be run on a variety of underwriting models, from linear and logistic regression to advanced machine learning models.

“Today, AI models are where all the action is,” Saleh said. “More advanced AI models are harder to interpret, so it’s harder to understand what variables are driving their decisions, and they may consume information that is messier, missing or wrong. That makes the fairness analysis more nuanced than it would be in a world of relatively interpretable models and data that mostly exists and is correct.”

When FairPlay monitors a model’s results in production, it can detect unfair outcomes and recommend changes or corrections.

“If fairness starts to decline, we try to understand why,” Saleh said. “In a dynamically changing economic environment, how do we ensure underwriting remains fair? These are questions that have never really been asked or addressed before.”

FairPlay recently started offering real-time monitoring. Because technological and economic conditions have been changing rapidly, “intermittent testing is no longer sufficient,” Saleh said.
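In practice, continuous monitoring can be as simple as tracking approval rates over a rolling window of recent decisions and raising an alert when the gap between groups widens. The sketch below is a minimal illustration of that idea, not FairPlay’s API; the window size, minimum sample and 0.8 trigger are assumptions.

```python
# A minimal illustration of continuous fairness monitoring, not FairPlay's
# API: keep a rolling window of recent decisions and alert when the ratio
# of approval rates between groups drops below a threshold.
from collections import deque

WINDOW = 5000            # most recent decisions to consider (assumed)
MIN_PER_GROUP = 100      # don't alert on tiny samples (assumed)
ALERT_RATIO = 0.8        # four-fifths rule used as an illustrative trigger
recent = deque(maxlen=WINDOW)

def record_decision(group: str, approved: bool) -> None:
    """Called for every live underwriting decision."""
    recent.append((group, approved))
    check_fairness()

def check_fairness() -> None:
    totals, approvals = {}, {}
    for group, approved in recent:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals if totals[g] >= MIN_PER_GROUP}
    if len(rates) < 2 or max(rates.values()) == 0:
        return
    ratio = min(rates.values()) / max(rates.values())
    if ratio < ALERT_RATIO:
        print(f"fairness alert: approval-rate ratio {ratio:.2f} across groups {rates}")
```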

Patrick Hall, a George Washington University professor who has worked on the NIST artificial intelligence risk management framework, said technology like FairPlay’s is important and that he believes FairPlay’s software is a reliable tool.

“Of course people need good tools,” Hall said. “But they have to fit the process and the culture to really be effective.”

A good modeling culture and good processes include ensuring there is some diversity on the team building the models.

“More diverse teams have fewer blind spots,” Hall said. That means not just demographic diversity, but having people with a wide range of skills, including economists, statisticians and psychometricians.

Good processes include transparency, accountability and documentation.

“It’s just old-school governance,” Hall said. “If you train this model, you have to write a document on it. You have to sign that document, and if the system doesn’t work as expected, you might actually experience the consequences.”




