Published: by

After participating in its debut last year, it was a special pleasure to visit the second edition of The World Conference on Explainable Artificial Intelligence (xAI2024). The conference was a full immersion into all aspects of explainable AI. The keynote speech by Prof. Fosca Giannotti on hybrid decision-making and the two panel discussions, on legal requirements of XAI and on XAI in finance, broadened the perspective between the detailed poster and presentation sessions.

On July 17th, I presented my work "Exploring the Reliability of SHAP Values in Reinforcement Learning", co-authored by Dataninja colleague Moritz Lange and our supervisors Prof. Laurenz Wiskott and Prof. Wolfgang Konen.
The work I presented focuses on using Shapley values for explainable reinforcement learning in multidimensional observation and action spaces, investigating the reliability of approximation methods and the interpretation of feature importances. While Shapley values are a widely used tool in machine learning, more work is required for their application to reinforcement learning. To those interested in Shapley values, I also recommend taking a look at the contribution of my Dataninja colleague Patrick Kolpaczki on improving the approximation of Shapley values. The conference proceedings are already available as part of Springer's book series "Communications in Computer and Information Science".
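The paper itself deals with approximation methods, but the underlying idea is the exact Shapley value: a feature's importance is its average marginal contribution over all feature subsets. As a minimal illustration, not the method from the paper, here is an exact computation for a toy value function standing in for an agent's Q-value; the function and feature names are purely hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: each feature's weighted average marginal
    contribution of joining every possible subset of the other features."""
    values = [0.0] * n_features
    for i in range(n_features):
        others = [f for f in range(n_features) if f != i]
        for k in range(len(others) + 1):
            # Weight of coalitions of size k in the Shapley formula
            weight = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
            for subset in combinations(others, k):
                marginal = value_fn(set(subset) | {i}) - value_fn(set(subset))
                values[i] += weight * marginal
    return values

# Toy "Q-value": feature 0 contributes 2.0, feature 1 contributes 1.0,
# feature 2 is irrelevant (illustrative only, not from the paper).
def q_value(present):
    return 2.0 * (0 in present) + 1.0 * (1 in present)

print(shapley_values(q_value, 3))  # -> [2.0, 1.0, 0.0]
```

The exact computation enumerates all 2^n subsets, which is exactly why approximation methods, and questions about their reliability, matter once observation spaces grow.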

Experiencing the conference at the impressive Mediterranean Conference Centre in Valletta, Malta, learning about the work of newly met people, and reconnecting with familiar members of the XAI community from last year was definitely a highlight of this summer.