
New video explores potential for AI bias in mortgage lending

Experts participating in BAD INPUT argue technology could expand access to credit, but also perpetuate discriminatory lending unless it is closely overseen.

In May, we’ll go deep on money and finance for a special theme month, talking to leaders about where the mortgage market is heading and how technology and business strategies are evolving to suit the needs of today’s buyers. A prestigious new set of awards, Best of Finance, also debuts this month, celebrating the leaders in this space. And subscribe to Mortgage Brief for weekly updates all year long.

Advances in artificial intelligence could help mortgage lenders evaluate millions of credit-invisible borrowers whose creditworthiness couldn’t previously be assessed, but inherent biases could also perpetuate redlining and other discriminatory lending practices unless closely overseen.

That’s the perspective of experts participating in BAD INPUT, a new video series aimed at raising public awareness of the ramifications of emerging AI technology.

The series, by filmmaker Alice Gu, explores how biases in algorithms and data sets could cause unintended harm in mortgage lending, healthcare and facial recognition technology.

“Educating the public on these risks and their impacts on communities of color is the first step towards advocating for more industry oversight, accountability and creation of more inclusive and equitable products,” said Lili Gangas of the Kapor Foundation, in announcing Tuesday’s release of BAD INPUT.

The Kapor Foundation provided backing for the project and partnered with Consumer Reports to produce the series as part of an Equitable Technology Policy Initiative. Since launching in November, the initiative has provided more than $5 million in funding to over a dozen organizations, including the Algorithmic Justice League and the Distributed AI Research Institute (DAIR).

“Humans should be involved early to make sure that the data itself isn’t biased,” attorney Jason Downs says in the BAD INPUT segment devoted to mortgage.

Downs — a partner at the Brownstein law firm who serves as lead counsel for clients facing enforcement actions — says humans should also be involved in auditing algorithms periodically.

“So I don’t think that technology is necessarily the solution,” Downs says. “I actually think that human intervention is.”

Kareem Saleh — founder and CEO of Fairplay AI, a “fairness-as-a-service” solution for lenders — tells BAD INPUT that his parents had trouble getting a loan when they immigrated to the U.S. from North Africa in the 1970s.

“You can’t have underwriting for the digital age and fairness tools for the Stone Age,” Saleh says. “Bias detection answers the questions, ‘Is my algorithm fair? And if not, why not?’ Bias remediation answers the questions, ‘Could my algorithm be fairer? What’s the economic impact to my business of being fairer?’”

Saleh says another key question for lenders is, “Did we give our declines, the folks we rejected, a second look?”

The segment also features perspectives from Melissa Koide, CEO and director of nonprofit research center FinRegLab; Michael Akinwumi, who leads the National Fair Housing Alliance’s Tech Equity Initiative; Timnit Gebru, a former Google executive who founded and leads the Distributed AI Research Institute (DAIR); and Vinhcent Le, senior legal counsel at The Greenlining Institute.

The release of BAD INPUT’s mortgage segment is timely, with four federal agencies putting lenders on notice last month that technology marketed as “artificial intelligence” and promising to remove bias from decision making still has “the potential to produce outcomes that result in unlawful discrimination.”

Last year, the Consumer Financial Protection Bureau warned lenders that the complexity of their technology is not a defense against discrimination claims: if a lender can’t explain how it decided to turn a borrower down for a loan, it can still be held accountable.

The CFPB is also working with federal regulators to draw up rules intended to protect homebuyers and homeowners from algorithmic bias in automated home valuations and appraisals.


Email Matt Carter

