Explainable AI (XAI)

A core research theme of COST Action CA19130, focusing on making AI-driven financial decisions transparent and interpretable.

Overview

Explainable AI addresses the “black box” problem in machine learning models used for financial services. As AI systems increasingly influence credit decisions, investment recommendations, and risk assessments, understanding their reasoning becomes critical for:

  • Regulatory compliance (EU AI Act, GDPR)
  • Consumer protection and fair lending
  • Model validation and risk management
  • Building trust in automated decisions

Key Research Areas

  1. Model Interpretability

    • Feature importance methods (SHAP, LIME)
    • Decision tree extraction from neural networks
    • Attention mechanisms in financial NLP
  2. Transparency Requirements

    • Right to explanation under GDPR
    • EU AI Act compliance frameworks
    • Industry best practices
  3. Applications

    • Credit scoring transparency
    • Fraud detection explanations
    • Algorithmic trading justifications
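One interpretability technique listed above, decision tree extraction, can be illustrated with a small sketch: train a shallow decision tree to mimic a neural network's predictions, then check how faithfully the interpretable surrogate reproduces the black box. The dataset, model sizes, and tree depth below are illustrative assumptions, not part of the Action's research outputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a credit-scoring dataset (assumed: 10 numeric features).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box": a small neural network classifier.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
black_box.fit(X, y)

# The surrogate: a shallow tree trained on the *network's* predictions,
# not the original labels -- this is the tree-extraction idea.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the interpretable tree agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The depth-3 tree can then be inspected or plotted directly, trading some fidelity for a decision rule a regulator or loan applicant can follow.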

Working Group

This topic was primarily investigated by WG2: Transparent versus Black Box Decision-Support Models.

WG2 Leader: Prof. Petre Lameski (North Macedonia)

Research topics within WG2 included:

  • Explainable AI in Credit Risk Management
  • Transparent Machine Learning for Financial Regulation
  • Interpretable Models for Consumer Credit

© Joerg Osterrieder 2025