Fairness-Accuracy Trade-offs in AI Credit Scoring: A Comparative Evaluation of Reweighting and Resampling Strategies Under Multiple Fairness Constraints
DOI: https://doi.org/10.63575/CIA.2026.40110

Keywords: algorithmic fairness, credit scoring, pre-processing debiasing, fairness-accuracy trade-off

Abstract
The proliferation of artificial intelligence in financial risk assessment has introduced significant concerns regarding algorithmic fairness, particularly in credit scoring systems where biased predictions can disproportionately affect protected demographic groups. This study presents a comparative evaluation of two predominant pre-processing debiasing strategies—reweighting and resampling—applied to AI-based credit scoring algorithms. Using two publicly available benchmark datasets (the UCI German Credit dataset and the UCI Default of Credit Card Clients dataset), we systematically assess the accuracy-fairness trade-offs under three widely adopted fairness constraints: statistical parity, equal opportunity, and equalized odds. Experimental results across three baseline classifiers (Logistic Regression, Random Forest, and XGBoost) indicate that reweighting achieves a more favorable balance between predictive accuracy and fairness when evaluated under equal opportunity and equalized odds constraints, while resampling demonstrates stronger performance in reducing statistical parity differences. The magnitude of accuracy degradation varies substantially depending on the choice of fairness constraint, with equalized odds imposing the greatest accuracy cost across both datasets. These findings provide evidence-based guidance for financial institutions seeking to implement fairness-aware credit scoring systems and suggest that the selection of debiasing strategy should be contingent upon the specific fairness objective prioritized by regulatory and institutional requirements.
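The reweighting strategy evaluated in the abstract is commonly implemented in the style of Kamiran and Calders: each (group, label) cell receives weight P(A=a)P(Y=y)/P(A=a, Y=y), so that the sensitive attribute and the label are independent in the weighted sample. A minimal sketch of that weighting scheme, together with the statistical parity difference used as one of the fairness metrics, is shown below; the function names and the specific weight formula are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def reweighting_weights(a, y):
    """Kamiran–Calders-style reweighting (illustrative, not the paper's code).

    Each sample in group a with label y gets the weight
    P(A=a) * P(Y=y) / P(A=a, Y=y), which makes A and Y statistically
    independent in the weighted training set.
    """
    a, y = np.asarray(a), np.asarray(y)
    w = np.empty(len(y), dtype=float)
    for av in np.unique(a):
        for yv in np.unique(y):
            mask = (a == av) & (y == yv)
            p_joint = mask.mean()           # empirical P(A=a, Y=y)
            if p_joint > 0:
                # empirical P(A=a) * P(Y=y) / P(A=a, Y=y)
                w[mask] = (a == av).mean() * (y == yv).mean() / p_joint
    return w

def statistical_parity_difference(y_pred, a):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| for a binary sensitive attribute."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

# Toy example: group A=0 has mostly positive labels, group A=1 mostly negative.
a = [0, 0, 0, 0, 1, 1, 1, 1]
y = [1, 1, 1, 0, 1, 0, 0, 0]
w = reweighting_weights(a, y)  # over-represented cells get weight < 1
```

The resulting array would typically be passed as `sample_weight` to the `fit` method of a scikit-learn classifier such as `LogisticRegression` or `RandomForestClassifier`, which is how a pre-processing approach like this leaves the learning algorithm itself unchanged.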


