Abstract
Artificial Intelligence (AI) algorithms are playing an increasingly significant role in automated lending decision-making systems, offering promising ways to reduce the cost of credit, improve predictive accuracy, and expand financial inclusion. However, they often inherit and amplify structural biases embedded in historical data. Traditional bias mitigation approaches that focus solely on data cleaning fail to address deeper systemic disparities, particularly those rooted in racial and income inequality. This paper presents an equity-focused framework that extends traditional data preprocessing by implementing structural repair mechanisms in the training data. Using a structured analysis of Home Mortgage Disclosure Act (HMDA) datasets from 2014, 2017, and 2020, the study investigates racial and gender disparities in both the data and the loan approval outcomes. A novel bias mitigation method that combines applicant race and income into composite tiers is introduced to correct imbalances within the dataset. Implementing this method led to significant improvements in fairness metrics by 1) reducing data distribution disparity, 2) narrowing loan approval disparities across demographic groups, and 3) improving the Equal Opportunity Metric from -0.75 to 0.08 for race. This research highlights the necessity of structural data repair as a prerequisite for ensuring fair outcomes in automated decision-making systems.
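The two ideas named in the abstract — composite race/income tiers and the Equal Opportunity metric — can be sketched as follows. This is an illustrative sketch only: the tier cutoffs, field names, and toy records are assumptions for exposition, not the paper's actual HMDA preprocessing pipeline.

```python
# Sketch of (1) a composite race x income tier label and
# (2) the Equal Opportunity difference (TPR gap between groups).
# Cutoffs, keys, and data below are hypothetical, not from the paper.

def composite_tier(race, income, cutoffs=(50_000, 100_000)):
    """Bucket an applicant into a race x income-band tier label."""
    lo, hi = cutoffs
    band = "low" if income < lo else ("mid" if income < hi else "high")
    return f"{race}/{band}"

def equal_opportunity_diff(records, group_a, group_b):
    """TPR(group_a) - TPR(group_b) among truly qualified applicants
    (label == 1); a value near 0 indicates equal opportunity."""
    def tpr(group):
        qualified = [r for r in records
                     if r["group"] == group and r["label"] == 1]
        approved = [r for r in qualified if r["approved"] == 1]
        return len(approved) / len(qualified) if qualified else 0.0
    return tpr(group_a) - tpr(group_b)

# Toy example: group A's qualified applicants are approved half as often.
records = [
    {"group": "A", "label": 1, "approved": 1},
    {"group": "A", "label": 1, "approved": 0},
    {"group": "B", "label": 1, "approved": 1},
    {"group": "B", "label": 1, "approved": 1},
]
print(composite_tier("A", 60_000))                    # A/mid
print(equal_opportunity_diff(records, "A", "B"))      # -0.5
```

Under this framing, resampling or repairing the training data so that each composite tier is adequately represented is what would drive the metric toward zero, as the abstract reports.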
Presenters
Muna Abdelrahim, Data Analyst/Program Manager, College of Science Engineering & Technology, Jackson State University, Mississippi, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
KEYWORDS
AI, Algorithmic Fairness, Bias Mitigation, Structural Data Repair, Automated Lending
