# Explainable AI (XAI)
A core research theme of COST Action CA19130, focusing on making AI-driven financial decisions transparent and interpretable.
## Overview
Explainable AI addresses the “black box” problem in machine learning models used for financial services. As AI systems increasingly influence credit decisions, investment recommendations, and risk assessments, understanding how they reach their conclusions becomes critical for:
- Regulatory compliance (EU AI Act, GDPR)
- Consumer protection and fair lending
- Model validation and risk management
- Building trust in automated decisions
## Key Research Areas
### Model Interpretability
- Feature importance methods (SHAP, LIME)
- Decision tree extraction from neural networks
- Attention mechanisms in financial NLP
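Feature-importance methods such as LIME work by probing the black-box model locally: perturb an instance, query the model, and fit a simple weighted linear surrogate whose coefficients serve as per-feature attributions. A minimal sketch of that idea, using a purely hypothetical credit-scoring function (the model, feature names, and values are illustrative assumptions, not from the Action's research):

```python
import numpy as np

# Hypothetical black-box credit-scoring model (illustrative only):
# a logistic score over income, debt ratio, and credit-history length.
def credit_model(X):
    income, debt_ratio, history = X[:, 0], X[:, 1], X[:, 2]
    z = 0.04 * income - 3.0 * debt_ratio + 0.2 * history - 1.0
    return 1.0 / (1.0 + np.exp(-z))

def lime_style_explanation(model, x, n_samples=5000, scale=0.1, seed=0):
    """Fit a locally weighted linear surrogate around instance x and
    return its coefficients as per-feature local attributions."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = model(X)
    # Weight perturbed samples by proximity to x (RBF kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # drop intercept: one attribution per feature

x = np.array([50.0, 0.4, 10.0])  # income (k€), debt ratio, history (years)
attributions = lime_style_explanation(credit_model, x)
for name, a in zip(["income", "debt_ratio", "history"], attributions):
    print(f"{name}: {a:+.4f}")
```

For this toy model the surrogate recovers the expected signs: debt ratio pulls the score down, income and history length push it up. Production explainers (the `shap` and `lime` packages) add sampling strategies and consistency guarantees on top of this basic recipe.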
### Transparency Requirements
- Right to explanation under GDPR
- EU AI Act compliance frameworks
- Industry best practices
### Applications
- Credit scoring transparency
- Fraud detection explanations
- Algorithmic trading justifications
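In credit scoring, transparency is often delivered as “reason codes”: the features that most lowered an applicant's score are mapped to consumer-facing explanations of an adverse decision. A minimal sketch of that pattern; the feature names, reason texts, and attribution values here are hypothetical placeholders:

```python
# Hypothetical mapping from model features to consumer-facing reason
# texts; the codes and wording are illustrative, not a real lender's.
REASON_CODES = {
    "debt_ratio": "Debt-to-income ratio too high",
    "history": "Length of credit history too short",
    "income": "Income insufficient for requested amount",
}

def adverse_action_reasons(attributions, top_k=2):
    """Return up to top_k reason strings for the features that pushed
    the score down, ordered from most to least negative attribution."""
    negative = [(name, a) for name, a in attributions.items() if a < 0]
    negative.sort(key=lambda item: item[1])  # most negative first
    return [REASON_CODES[name] for name, _ in negative[:top_k]]

# Example attributions from some upstream explainer (values made up):
attrs = {"income": 0.005, "debt_ratio": -0.36, "history": 0.024}
print(adverse_action_reasons(attrs))  # ['Debt-to-income ratio too high']
```

Only features with negative attributions are surfaced, which keeps the explanation focused on what the applicant could change; how attributions are computed upstream (SHAP, LIME, or otherwise) is independent of this reporting step.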
## Working Group
This topic was primarily investigated by WG2: Transparent versus Black Box Decision-Support Models.
WG2 Leader: Prof. Petre Lameski (North Macedonia)
## Related Publications
- Explainable AI in Credit Risk Management
- Transparent Machine Learning for Financial Regulation
- Interpretable Models for Consumer Credit
© Joerg Osterrieder 2025