ECML PKDD International Workshop on

eXplainable Knowledge Discovery in Data Mining

September 13, 2024

Call for Papers

In the past decade, machine learning based decision systems have been widely adopted in a broad range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have great potential to improve decision making in different fields, their use may present ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise across many applications, and they are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.

Nowadays, most Artificial Intelligence (AI) systems are based on machine learning algorithms. The relevance of and need for ethics in AI is supported and highlighted by various initiatives arising from the research community to provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR regulation, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place; the "ACM Statement on Algorithmic Transparency and Accountability"; Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making"; and the "Ethics Guidelines for Trustworthy AI" provided by the EU High-Level Expert Group on AI.

The challenge of designing and developing trustworthy AI-based decision systems is still open and requires a joint effort across the technical, legal, sociological and ethical domains.

The purpose of XKDD, eXplaining Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. This year, the workshop will also seek submissions addressing important, under-explored issues in specific fields related to eXplainable AI (XAI), such as XAI for a more social and responsible AI, XAI as a tool to align AI with human values, XAI for outlier and anomaly detection, quantitative and qualitative evaluation of XAI approaches, and XAI case studies. The workshop seeks top-quality submissions related to ethical, fair, explainable and transparent data mining and machine learning approaches. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD welcomes contributions from researchers in academia and industry working on topics addressing these challenges, primarily from a technical point of view, but also from a legal, ethical or sociological perspective.

Topics of interest include, but are not limited to:

Submissions with a focus on important, under-explored issues related to XAI are particularly welcome, e.g. XAI for fairness checking approaches, XAI for privacy-preserving systems, XAI for federated learning, XAI for time series and graph-based approaches, XAI for visualization, XAI in human-machine interaction, benchmarking of XAI methods, and XAI case studies.

The call for papers can be downloaded here.

Submission

Electronic submissions will be handled via CMT.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).

The maximum length of either research or position papers is 16 pages, including references. Over-length papers will be rejected without review (papers with smaller page margins or font sizes than specified in the author instructions and set in the style files will also be treated as over-length).

Authors who submit their work to XKDD 2024 commit themselves to presenting their paper at the workshop in case of acceptance. XKDD 2024 considers the author list submitted with the paper as final. No additions or deletions to this list may be made after paper submission, either during the review period or, in case of acceptance, at the camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors has presented the paper at the workshop. Pre-proceedings will be available online before the workshop.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

All papers for XKDD 2024 must be submitted by using the on-line submission system at CMT.

Important Dates

  • Paper submission deadline: June 28, 2024 (extended from June 21)
  • Accept/reject notification: July 28, 2024 (extended from July 21)
  • Camera-ready deadline: August 7, 2024 (extended from July 31)
  • Workshop: September 13, 2024

Organization

Program Chairs

Invited Speakers

Przemyslaw Biecek

Professor at Warsaw University of Technology, Warsaw, Poland

AI is broken and we need XAI to fix it

Why do we explain ML models? Many XAI techniques respond to the needs of users. They increase trust and confidence in the use of AI tools, and enable effective human-model interaction. During this talk, however, I will focus on the opportunities that XAI opens up for model developers. I will present examples of situations where XAI techniques help detect model weaknesses, fix errors, and even extract new knowledge from models. I will introduce the RED-XAI perspective and encourage attendees to create new methods for this area.
Przemyslaw Biecek works as an associate professor at MiNI, Warsaw University of Technology, and as an assistant professor at MIM, University of Warsaw. He completed his studies in mathematical statistics at PAM, WUST, and in software engineering at CSM, WUST. His personal mission is to enhance human capabilities by providing support through access to data-driven and knowledge-based predictions. He pursues this mission by developing methods and tools for responsible machine learning, trustworthy artificial intelligence, and reliable software engineering. He has a keen interest in predictive modeling of large and complex data, data visualization, and model interpretability. His main research project is DrWhy.AI, which focuses on tools and methods for the exploration, explanation, and debugging of predictive models. His other research activities are primarily centered on applications, most notably high-throughput genetic profiling in oncology. He also shows a strong interest in evidence-based education, evidence-based medicine, general machine learning, and statistical software engineering, while being a staunch advocate for data literacy.

Panagiotis Papapetrou

Professor at Stockholm University, Stockholm, Sweden

The need of XAI in medical applications

The integration of AI and machine learning in healthcare has opened new opportunities for more effective diagnosis, treatment, and patient management. However, due to the complexity of machine learning models alongside the inherent multimodality of medical data, the need for transparency and interpretability—particularly in high-stakes medical environments—grows ever more critical. In this talk I will highlight the need for explainability during all stages of model building and model integration in AI-based decision support systems in healthcare. Examples on explainable machine learning methods for sequential and temporal data will be outlined and discussed. This presentation will emphasize the importance of ensuring that medical professionals and patients alike can understand and trust the outputs of AI-driven tools, ultimately leading to improved healthcare outcomes.
Panagiotis Papapetrou is a Professor and Vice Head of the Department of Computer and Systems Sciences at Stockholm University, Sweden. His research interests are algorithmic data mining on large and complex data. Specifically, he is interested in time series classification and forecasting, interpretable and explainable machine learning, searching and mining large and complex sequences, and learning from electronic health records.

Program Committee

  • Leila Amgoud, CNRS, France
  • Umang Bhatt, University of Cambridge, UK
  • Miguel Couceiro, Inria, France
  • Mennatallah El-Assady, ETH AI Center, Switzerland
  • Josep Domingo-Ferrer, Universitat Rovira i Virgili, Spain
  • Françoise Fessant, Orange Labs, France
  • Elisa Fromont, University of Rennes, France
  • Salvatore Greco, Politecnico di Torino, Italy
  • Andreas Holzinger, Medical University of Graz, Austria
  • Thibault Laugel, AXA, France
  • Paulo Lisboa, Liverpool John Moores University, UK
  • Marcin Luckner, Warsaw University of Technology, Poland
  • Jurek Leonhardt, Leibniz University Hannover, Germany
  • Amedeo Napoli, CNRS, France
  • John Mollas, Aristotle University of Thessaloniki, Greece
  • Ramaravind Kommiya Mothilal, Everwell Health Solutions, India
  • Enea Parimbelli, University of Pavia, Italy
  • Roberto Prevete, University of Napoli, Italy
  • Antonio Rago, Imperial College London, UK
  • Vincenzo Pasquadibisceglie, Università degli Studi di Bari Aldo Moro, Italy
  • Jan Ramon, Inria, France
  • Xavier Renard, AXA, France
  • Daniele Regoli, Intesa Sanpaolo, Italy
  • Mahtab Sarvmaili, Dalhousie University, Canada
  • Christin Seifert, University of Duisburg-Essen, Germany
  • Udo Schlegel, University of Konstanz, Germany
  • Mattia Setzu, University of Pisa, Italy
  • Fabrizio Silvestri, Università di Roma, Italy
  • Dominik Slezak, University of Warsaw, Poland
  • Stefano Teso, Università di Trento, Italy
  • Cagatay Turkay, University of Warwick, UK
  • Genoveva Vargas-Solar, CNRS, LIRIS, France
  • Marco Virgolin, Chalmers University of Technology, Sweden
  • Martin Jullum, Norwegian Computing Center, Norway
  • Guangyi Zhang, KTH Royal Institute of Technology, Sweden
  • Albrecht Zimmermann, Université de Caen, France

Program

Welcome, General Overview, Supporting projects presentation

Morning

First Keynote Talk: Przemyslaw Biecek

AI is broken and we need XAI to fix it.

Research paper presentations (20 min + 5 min Q&A)


A Socioinformatic Approach to xAI

Alexander Wilhelm, Katharina A Zweig

Coffee Break

Research paper presentations (20 min + 5 min Q&A)


An Empirical Study of Feature Dynamics During Fine-tuning

Hamed Behzadi-Khormouji, Lena De Roeck, Jose Oramas


KDLIME: KNN-Kernel Density-Based Perturbation for Local Interpretability

Yu-Hsin Hung, Chia-Yen Lee


Enhancing interpretability of rule-based classifiers through feature graphs

Christel Sirocchi, Damiano Verda

Lunch Break

Afternoon

Second Keynote Talk: Panagiotis Papapetrou

The need of XAI in medical applications

Research paper presentations (20 min + 5 min Q&A)


Explainable Malware Detection with Tailored Logic Explained Networks

Peter Anthony, Francesco Giannini, Michelangelo Diligenti, Marco Gori, Martin Homola, Štefan Balogh, Ján Mojžiš

Coffee Break

Research paper presentations (20 min + 5 min Q&A)


Interactive Counterfactual Generation for Univariate Time Series

Udo M Schlegel, Julius Rauscher, Daniel Keim


Generating Explanatory Rules for Temporal Data Using Prior Knowledge

Eleonora Cappuccio, Bahavathy Kathirgamanathan, Salvatore Rinzivillo, Gennady Andrienko, Natalia Andrienko


CRITS: Convolutional Rectifier for Interpretable Time Series Classification

Alejandro Kuratomi, Zed Lee, Guilherme Dinis Junior, Tony Lindgren, Diego García

Concluding Remarks

Venue

The event will take place at the ECML-PKDD 2024 Conference at the Radisson Blu Hotel, Room Delta.


Additional information about the location can be found at
the main conference web page: ECML-PKDD 2024

Partners

This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 834756, XAI: Science and technology for the eXplanation of AI decision making.

This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.

This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.

This workshop is partially supported by the European Community H2020 Program under research and innovation programme, grant agreement 952215 TAILOR.

This workshop is partially supported by the CHIST-ERA project SAI (CHIST-ERA-19-XAI-010), funded by MUR (N. not yet available), FWF (N. I 5205), EPSRC (N. EP/V055712/1), NCN (N. 2020/02/Y/ST6/00064), ETAg (N. SLTAT21096), and BNSF (N. KP-06-AOO2/5).

This workshop is partially supported by TANGO, a €7M EU-funded Horizon Europe project that aims to develop the theoretical foundations and the computational framework for synergistic human-machine decision making. The four-year project will pave the way for the next generation of human-centric AI systems.

This workshop is partially supported by the European Community NextGenerationEU programme under the funding scheme PNRR-PE-AI FAIR (Future Artificial Intelligence Research).

The XKDD 2024 event is organised as part of the SoBigData.it project (Prot. IR0000013 - Call n. 3264 of 12/28/2021) initiatives aimed at training new users and communities in the usage of the research infrastructure (SoBigData.eu). "SoBigData.it receives funding from European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: 'SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics' – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021."

Contacts

All inquiries should be sent to:

francesca.naretto@di.unipi.it

francesco.spinnato@di.unipi.it