The use of artificial intelligence (AI) in mortgage lending is poised to grow amid higher origination costs and greater competition, but without correcting the underlying causes of bias in data, AI models can embed racial inequity at a larger scale, a recent report from the Urban Institute concluded.

“It must first, however, overcome the biases and inequities already embedded into the data it analyzes. Policymakers and the mortgage industry must reckon with historical and present-day barriers that lock would-be homebuyers of color out of the market altogether,” according to the report published on Monday. 

The report draws on nearly 50 interviews with staff at federal agencies, financial technology companies, mortgage lenders and consumer advocacy groups, which found that AI's ability to improve racial equity can be undermined by the data used to train an algorithm, not just by the algorithm itself.

Interviews revealed that some of the most promising AI-based underwriting models are also the most controversial, such as those that explicitly incorporate race up front.

While AI is being used in marketing, underwriting, property valuations and fraud detection, it is only beginning to be incorporated into servicing, according to interview findings.

In terms of adoption, government-sponsored enterprises (GSEs), large mortgage lenders and fintech firms have led the way. The interviews indicated, however, that adoption appears lower among smaller, mission-oriented lenders, such as minority depository institutions (MDIs) and community development financial institutions (CDFIs).

Intentional design for equity, carefully studied pilot programs and regulatory guidance are the three areas the report recommends policymakers, regulators and developers focus on to ensure AI equitably expands access to mortgage services.

In designing AI intentionally for equity, the report emphasized that careful thought is needed about the training data being used and which biases must be accounted for; how to make the AI or machine-learning tool more transparent to users; and whether the human process being replaced by AI is itself fair or in need of improvement.
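To illustrate what accounting for bias in training data can mean in practice, here is a minimal sketch of one common first-pass audit: checking historical approval decisions for group-level disparities before a model is trained on them. This example is not from the report; the column names and the demographic-parity check are illustrative assumptions.

```python
# Minimal sketch (not the report's method): auditing a candidate
# training dataset for outcome disparities before fitting an
# underwriting model. Column names ("race", "approved") are hypothetical.
import pandas as pd

# Hypothetical historical underwriting decisions.
data = pd.DataFrame({
    "race": ["white", "white", "white", "black", "black", "black"],
    "approved": [1, 1, 0, 1, 0, 0],
})

# Approval rate by group: a first-pass check for label bias that a
# model trained on this data would otherwise learn and reproduce.
rates = data.groupby("race")["approved"].mean()
print(rates)

# Demographic-parity gap: the spread between the highest and lowest
# group approval rates. A large gap flags the data for review.
gap = rates.max() - rates.min()
print(f"approval-rate gap: {gap:.2f}")
```

A gap in historical approval rates does not by itself prove the data is biased, but it is the kind of signal the report suggests developers should investigate before an AI tool inherits it.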

The Urban Institute suggested the federal government — Ginnie Mae and the GSEs, in particular — can also use pilot programs to determine the effectiveness of AI tools and address equity concerns at a smaller scale before the industry implements these tools more broadly.

One potential new pilot would have the GSEs or the Federal Housing Administration (FHA) test the use of AI in mortgage servicing.

“An AI-based algorithm that projects the likelihood of borrower delinquency and identifies the best risk management process could be valuable. The GSEs or the FHA could deploy the pilot and test it against current processes to determine delinquency and resulting management,” according to the report. 
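To make the pilot idea concrete, below is a minimal sketch of what testing an AI-based delinquency model against a current process might look like. Everything here is an assumption for illustration: the data is synthetic, and the logistic-regression model and credit-score-cutoff baseline stand in for whatever tools and processes an actual pilot would compare.

```python
# Minimal sketch, not the report's pilot design: comparing a candidate
# AI delinquency model against a simple rule-based baseline, as a GSE
# or FHA pilot might. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical borrower features: credit score and debt-to-income ratio.
credit_score = rng.normal(700, 50, n)
dti = rng.uniform(0.1, 0.6, n)

# Synthetic delinquency outcome loosely tied to both features.
logit = -0.02 * (credit_score - 700) + 4.0 * (dti - 0.35)
delinquent = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([credit_score, dti])
X_train, X_test, y_train, y_test = train_test_split(
    X, delinquent, test_size=0.3, random_state=0
)

# Candidate AI tool: a logistic-regression delinquency model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
model_scores = model.predict_proba(X_test)[:, 1]

# Stand-in for the current process: flag borrowers below a score cutoff.
baseline_scores = (X_test[:, 0] < 660).astype(float)

print("model AUC:   ", roc_auc_score(y_test, model_scores))
print("baseline AUC:", roc_auc_score(y_test, baseline_scores))
```

A real pilot would go well beyond predictive accuracy, as the report stresses, examining whether the model's risk-management recommendations produce equitable outcomes across borrower groups.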

Interviewees pointed to the lack of clear regulatory standards governing the use of AI across the mortgage industry.

“Anything that creates more certainty and safety from the regulatory community would help both industry and consumer stakeholders,” according to an interviewee.

The report noted the need for federal regulators to protect consumers, particularly the most vulnerable, as the government, lenders and third-party vendors may all have differing incentives for AI use. 

The Consumer Financial Protection Bureau (CFPB) also has a role to play to clearly delineate “which data elements consumers have a right to access, what the standards are for private companies accessing and transferring data, and how several federal consumer finance laws should be applied to consumer data transfers,” according to the report. 

Changes will not occur automatically, and the federal government must lead the way in ensuring that AI produces both efficient and equitable outcomes, the report said.

“A strong role for the federal government can overcome the innovation chasm, provide greater clarity on the price of innovation and more easily expand the most promising AI-based services that optimize both efficiency and equity.”


