Interpretable AI in Credit Scoring: A Comparative Survey of SHAP, LIME, and Hybrid Approaches
Sai Prashanth Pathi, Independent Researcher, USA
Jahnavi Swetha Pothineni, Independent Researcher, USA

Abstract
Explainable AI (XAI) is critical in domains such as credit scoring, where model decisions must be transparent and accountable. This survey compares three local explanation approaches: SHAP, LIME, and a hybrid ensemble that integrates both. We evaluate these methods on consistency, variability, and suitability for regulated environments. Emphasis is placed on their use in credit risk modeling, with insights drawn from both the literature and practical evaluation.
Keywords
Explainable AI (XAI), SHAP, LIME, Local interpretability, Hybrid model explanations, Credit Risk Modeling
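
As a rough illustration of the three techniques named in the abstract, the following Python sketch computes SHAP and LIME attributions for a single loan applicant and blends them into a simple hybrid score. This is a minimal sketch, not the authors' pipeline: the toy data, the feature names, and the 50/50 weighting of the normalised attributions are assumptions made purely for illustration. It assumes the shap, lime, and scikit-learn packages.

# Illustrative sketch only; feature names, data, and weighting are hypothetical.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for a credit dataset: rows = applicants, columns = features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
feature_names = ["income", "debt_ratio", "loan_amount", "credit_history"]

model = GradientBoostingClassifier().fit(X_train, y_train)
x = X_train[0]  # the instance to explain

# SHAP: additive feature attributions from a tree explainer.
shap_explainer = shap.TreeExplainer(model)
shap_vals = shap_explainer.shap_values(x.reshape(1, -1))[0]  # one value per feature

# LIME: weights of a local surrogate model fitted around the instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="classification"
)
lime_exp = lime_explainer.explain_instance(x, model.predict_proba, num_features=4)
lime_vals = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:  # (feature index, weight) pairs
    lime_vals[idx] = weight

# Hybrid: normalise each attribution vector, then average them
# (one of many possible integration schemes; 50/50 is an assumption).
def normalise(v):
    denom = np.sum(np.abs(v))
    return v / denom if denom else v

hybrid_vals = 0.5 * normalise(shap_vals) + 0.5 * normalise(lime_vals)
for name, s, l, h in zip(feature_names, shap_vals, lime_vals, hybrid_vals):
    print(f"{name:15s} SHAP={s:+.3f}  LIME={l:+.3f}  hybrid={h:+.3f}")

Normalising each attribution vector before averaging keeps SHAP's log-odds scale from dominating LIME's surrogate weights; other integration schemes, such as rank aggregation, are equally plausible.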