
Having participated in its debut last year, I was especially pleased to attend the second edition of The World Conference on Explainable Artificial Intelligence (xAI2024). The conference was a full immersion into all aspects of explainable AI. The keynote speech by Prof. Fosca Giannotti on hybrid decision-making and the two panel discussions on legal requirements of XAI and on XAI in finance broadened our perspectives in between the detailed poster and presentation sessions.

On July 17th, I presented my work "Exploring the Reliability of SHAP Values in Reinforcement Learning", co-authored with my Dataninja colleague Moritz Lange and our supervisors Prof. Laurenz Wiskott and Prof. Wolfgang Konen.
The work focuses on using Shapley values for explainable reinforcement learning in multidimensional observation and action spaces, investigating the reliability of approximation methods and the interpretation of feature importances. While Shapley values are a widely used tool in machine learning, more work is required for their application to reinforcement learning. Readers interested in Shapley values should also take a look at the contribution of my Dataninja colleague Patrick Kolpaczki on improving the approximation of Shapley values. The conference proceedings are already available as part of Springer's book series "Communications in Computer and Information Science".
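
For readers curious about the mechanics, the following is a minimal, hypothetical sketch of how SHAP's model-agnostic KernelExplainer could be pointed at an RL policy to attribute its action preferences to observation features. It illustrates only the general mechanism, not the approximation methods studied in the paper; the toy policy function, observation dimensionality, and background data are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch: attributing an RL policy's action preferences to
# observation features with SHAP. The toy policy and data are illustrative only.
import numpy as np
import shap

def policy_scores(obs_batch):
    # Stand-in for a trained policy network: returns one score per action
    # for each observation (here: 2 actions, 4 observation features).
    return np.stack([obs_batch.sum(axis=1), -obs_batch.sum(axis=1)], axis=1)

background = np.random.randn(50, 4)                 # reference observations
explainer = shap.KernelExplainer(policy_scores, background)
attributions = explainer.shap_values(np.random.randn(5, 4))
# 'attributions' holds per-feature Shapley value estimates for each action's score.
```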

Experiencing the conference at the impressive Mediterranean Conference Centre in Valletta, Malta, learning about the work of people I met there for the first time, and reconnecting with familiar members of the XAI community from last year has definitely been a highlight of this summer.

 


Time flies... It wasn't that long ago (or at least it feels like it) that I wrote a blog post about the first Dataninja Retreat.

Now we have already held the closing conference of the Dataninja project. From Tuesday the 25th to Thursday the 27th, we had the pleasure of enjoying three days of science and meetups at Bielefeld University, the “headquarters” of Dataninja. The rich program consisted of keynote talks, poster sessions, and reports from our sibling project “KI starters”.

The (RL)3 project, consisting of Moritz Lange under the supervision of Prof. Wiskott at Ruhr-University Bochum and myself under the supervision of Prof. Konen at TH Köln, contributed two posters: a short overview of our joint project and a more in-depth presentation of our most recent research.

Of special relevance to our topics were the keynotes by Holger Hoos ("How and Why AI will shape the future"), Henning Wachsmuth ("LLM-based Argument Quality Improvement"), and Sebastian Trimpe ("Trustworthy AI for Physical Machines"). Many thanks to Prof. Barbara Hammer and her team (Dr. Ulrike Kuhl, Özlem Tan) from Bielefeld University for organizing and hosting such a fantastic event!

As usual, it was a very pleasant occasion to meet our fellow PhD candidates, and we have already made plans to meet up again, as the first of us are already on the home straight.


In the last week of February, my (RL)3 Dataninja colleague Moritz Lange and I had the chance to attend the AAAI Conference on Artificial Intelligence (AAAI 2024).

Listening to Yann LeCun speak in person about the challenges of machine learning was inspiring, and attending Moritz' presentation of our collaborative work "Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks" was a real pleasure.

Besides many interesting talks (one of them by my colleague from the Dataninja research training group, Patrick Kolpaczki, presenting his work on approximating Shapley values), attending such a big conference was a memorable experience, as was exploring the natural surroundings of Vancouver.


Our participation in this year's edition of the LOD conference, as previously announced in one of our blog posts, proved to be an exceptionally enjoyable experience.

The systematic evaluation of auxiliary tasks in reinforcement learning published in “Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison” by first author Moritz Lange (Dataninja colleague from Ruhr University Bochum) generated significant interest, as did my presentation of our work “Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning”.
The quality of our collaboration in the Dataninja (RL)3-project was acknowledged: we are excited to share that the comparison of auxiliary tasks for RL won the Best Paper Award!
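
For readers unfamiliar with the idea of auxiliary tasks, the snippet below is a generic, hedged sketch of how an auxiliary objective (here: reconstructing observations from a shared encoder) can be trained alongside an RL agent. The architecture and dimensions are illustrative assumptions and not the configurations evaluated in the award-winning paper.

```python
# Hypothetical sketch of an auxiliary task for representation learning in RL:
# an encoder shared with the RL agent is additionally trained to reconstruct
# observations. Dimensions and architecture are illustrative only.
import torch
import torch.nn as nn

class EncoderWithAuxHead(nn.Module):
    def __init__(self, obs_dim: int = 8, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))

    def forward(self, obs: torch.Tensor):
        z = self.encoder(obs)          # latent representation consumed by the RL agent
        return z, self.decoder(z)      # reconstruction used only for the auxiliary loss

def auxiliary_loss(model: EncoderWithAuxHead, obs: torch.Tensor) -> torch.Tensor:
    _, recon = model(obs)
    return nn.functional.mse_loss(recon, obs)

# In a decoupled setup, gradients from this auxiliary loss shape the encoder,
# while the RL algorithm operates on the resulting latent features z.
```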

Set against the picturesque backdrop of the Lake District, the conference provided an ideal setting for the thought-provoking keynote speeches that spanned a wide range of topics, from neuroscience to large language models and their applications. The LOD conference is held in conjunction with the Advanced Course & Symposium on Artificial Intelligence & Neuroscience (ACAIN), a collaboration that fosters mutual respect for advancements in each respective field and promotes the exchange of valuable insights, enhancing the experience and value of both conferences.

Beyond the scientific sessions, the hikes in the hills surrounding Lake Grasmere offered a fantastic opportunity for more in-depth discussions about science and life.

View of the Lake District, taken during a hike with my colleague.


From September 6th to September 9th we held our last annual retreat of the Dataninja research training group in Krefeld.
Alongside a talk on algorithmic fairness by our invited speaker, Dr. Alessandro Fabris from the Max Planck Institute for Security and Privacy, each PhD candidate from the Dataninja projects presented their progress and current investigations.
In my contribution (Raphael Engelhardt, PhD candidate at TH Köln, Campus Gummersbach, supervised by Prof. Dr. Wolfgang Konen), I spoke about our recent progress on explainability for deep reinforcement learning, published as an open-access journal article.

Between talks, enough time was left for valuable discussions and informal exchanges of ideas about new approaches. As usual, meeting the other candidates to talk about the small victories and challenges of our journey towards the PhD was a great pleasure. This year's fun activity required all the combined smartness of students and professors to solve the riddles in the escape rooms.
After the success of these three days, we are especially looking forward to our closing conference next year.
Group picture of the Dataninja Retreat 2023


Lake Windermere on a misty morning (By Mkonikkara, CC BY-SA 3.0, via Wikimedia Commons)

For the second time, we (Raphael Engelhardt and Wolfgang Konen) have been given the opportunity to present our work at the Conference on Machine Learning, Optimization and Data Science (LOD).

To this year's 9th edition, held in Grasmere, England, on September 22nd-26th, we have the honor of contributing two papers stemming from the fruitful collaboration with our Dataninja colleagues at Ruhr-University Bochum, Prof. Laurenz Wiskott and PhD student Moritz Lange:

  1. Our work entitled “Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning” describes how we translate the cybernetic board game Ökolopoly into the realm of reinforcement learning and evaluate various methods of handling large observation and action spaces. Large spaces pose a serious challenge to reinforcement learning, and we hope our case study will provide valuable approaches to fellow researchers. Additionally, we make the environment available to the scientific community with an OpenAI Gym-compatible API (a minimal environment skeleton is sketched after this list).
  2. “Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison”, under the first authorship of Moritz Lange, is a thorough comparison of auxiliary tasks in a variety of control and robotics tasks, and shows how agents benefit from decoupled representation learning of auxiliary tasks in complex environments.
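
To give an impression of what an OpenAI Gym-compatible interface involves, here is a minimal, hypothetical environment skeleton with multidimensional observation and action spaces. The bounds, shapes, and dynamics are made-up placeholders and do not reflect the actual Ökolopoly implementation.

```python
# Hypothetical skeleton of a Gym-compatible environment with multidimensional
# observation and action spaces. Bounds, shapes, and dynamics are placeholders
# and do not reflect the actual Ökolopoly environment.
import gym
import numpy as np
from gym import spaces

class BoardGameEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(low=0.0, high=30.0, shape=(8,), dtype=np.float32)
        self.action_space = spaces.Box(low=0.0, high=8.0, shape=(5,), dtype=np.float32)
        self.state = None

    def reset(self):
        self.state = self.observation_space.sample()
        return self.state

    def step(self, action):
        # Placeholder dynamics; the real environment applies the board game rules here.
        self.state = np.clip(self.state + 0.1 * (float(action.sum()) - 2.0), 0.0, 30.0).astype(np.float32)
        reward = -float(np.abs(self.state.mean() - 15.0))
        done = bool(self.state.mean() < 1.0)
        return self.state, reward, done, {}
```

Any RL library that speaks the Gym API can then interact with such an environment through the standard reset/step loop.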

We are very grateful for this opportunity and look forward to hearing about other researchers’ advances in machine learning and to interesting discussions about current research topics.


We are delighted to announce that our article “Iterative Oblique Decision Trees Deliver Explainable RL Models” was accepted and is now part of the special issue “Advancements in Reinforcement Learning Algorithms” in the MDPI journal Algorithms (impact factor 2.2, CiteScore 3.7).

Explainability in AI and RL (known as XAI and XRL) is becoming increasingly important. In our paper we investigate several possibilities to replace complex “black box” deep reinforcement learning (DRL) models with intrinsically interpretable decision trees (DTs), which require orders of magnitude fewer parameters. A highlight of our paper is the finding that, on seven classic control RL problems, the DTs achieve rewards similar to those of the DRL models, sometimes even surpassing them. The key to this success is an iterative sampling method that we have developed.

In our work, we present and compare three different methods of collecting samples to train DTs from DRL agents. We test our approaches on seven problems, including all classic control environments from OpenAI Gym, LunarLander, and the CartPole-SwingUp challenge. In particular, our iterative approach, which combines the DTs’ own exploration with the DRL agent’s predictions, is able to generate shallow, understandable, oblique DTs that solve the challenges and even outperform the DRL agents they were trained from. Additionally, we demonstrate how, given their simpler structure and fewer parameters, DTs allow for inspection and insights and offer a higher degree of explainability.
To readers interested in explainable AI, and in understandable reinforcement learning in particular, we recommend taking a look at our open-access article.
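
As a rough, hypothetical illustration of the iterative idea (not the exact algorithm from the article), the loop below alternates between letting the current DT collect states in the environment and letting the DRL agent label them. The agent and environment interfaces are assumptions, and scikit-learn's axis-parallel DecisionTreeClassifier merely stands in for an oblique DT learner.

```python
# Hedged sketch of an iterative sampling loop for training DTs from a DRL agent.
# The DT explores the environment, the DRL agent relabels the visited states, and
# the DT is retrained on the growing dataset. Interfaces are illustrative; an
# oblique DT learner would replace the axis-parallel stand-in used here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def iterative_dt_training(env, drl_agent, n_iterations=5, episodes_per_iter=10, max_depth=4):
    states, labels = [], []
    dt = None
    for _ in range(n_iterations):
        for _ in range(episodes_per_iter):
            obs, done = env.reset(), False
            while not done:
                # The current DT (once available) drives exploration ...
                if dt is None:
                    act = drl_agent.predict(obs)
                else:
                    act = dt.predict(obs.reshape(1, -1))[0]
                # ... while the DRL agent provides the "oracle" action label.
                states.append(obs)
                labels.append(drl_agent.predict(obs))
                obs, _, done, _ = env.step(act)
        dt = DecisionTreeClassifier(max_depth=max_depth)
        dt.fit(np.array(states), np.array(labels))
    return dt
```

The design resembles DAgger-style imitation learning, in which the learner's own rollouts determine which states the expert is asked to label.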

Decision surfaces on MountainCar. The figure shows the decision surfaces of DRL models (1st column) and various DT models (2nd and 3rd columns) on the environments MountainCar (upper row) and MountainCarContinuous (lower row). The little black dots visualize various episodes, showing how the MountainCar rolls back and forth in the valley until it finally reaches the goal on the mountaintop (x=0.5). The DRL models exhibit more complicated decision surfaces, while the DT models reach the same performance (number in round brackets in the title) with simpler decision surfaces.


The second edition of the Dataninja Spring School was held from the 8th to the 10th of May 2023 in Bielefeld as a hybrid event. We had the honor and pleasure of attending talks and tutorials by renowned researchers and aspiring young scientists.
We contributed an extended abstract and our scientific poster “Finding the Relevant Samples for Decision Trees in Reinforcement Learning”, presented during Tuesday’s poster session. The opportunity for fruitful discussions and interactions with fellow PhD students from the Dataninja project was much appreciated!


The operation of nuclear power plants (NPPs) is one of the most safety-critical tasks in industry. Prior to using AI methods in this area, it should be thoroughly investigated and evaluated via simulations whether AI can learn (e.g., by reinforcement learning, RL) to power up and shut down a nuclear reactor, and how well such an approach meets the safety requirements. This was exactly the task of Niklas Fabig's master thesis, which he conducted under the supervision of Prof. Dr. Wolfgang Konen and PhD candidate Raphael Engelhardt as part of our (RL)^3 project within https://dataninja.nrw/. The work uses as a starting point a Java-based NPP simulation tool by Prof. Dr. Benjamin Weyers, University of Trier (screenshot example in the image). Niklas Fabig first constructed a Java-Python bridge and then conducted over 2000 RL simulation experiments under various settings. He was able to show that RL algorithms can learn the power-up procedure, yielding high returns, but that much more research is needed to reliably meet the safety requirements.
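
Purely as an illustration of what such a Java-Python bridge can look like (the thesis may well use a different mechanism), here is a minimal sketch based on py4j; the simulator methods shown are invented placeholder names, not the actual simulator API.

```python
# Hypothetical sketch of a Java-Python bridge using py4j. The Java side would run a
# py4j GatewayServer exposing the NPP simulation; all method names below are
# invented placeholders, not the actual simulator API.
from py4j.java_gateway import JavaGateway

gateway = JavaGateway()       # connect to the JVM running the simulation
npp = gateway.entry_point     # Java object registered as the gateway's entry point

state = npp.getReactorState()     # read the current simulation state (placeholder name)
npp.setControlRodPosition(0.8)    # apply an action chosen by the RL agent (placeholder name)
npp.stepSimulation()              # advance the Java simulation one time step (placeholder name)
```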

The investigation carried out by Niklas Fabig constitutes very interesting and novel research in this field, which has now earned him 3rd place in the Steinmüller Engineering Award 2023. His supervisor Wolfgang Konen was deeply impressed by the solid, comprehensive, and innovative work done by Niklas Fabig and congratulates him warmly. It should be noted that the master thesis was conducted during the Corona years 2021-2022, so the supervision had to take place fully online. Nevertheless, the results of the work and the motivation of Niklas Fabig were by no means diminished compared to supervision in person.

 


View from Certosa di Pontignano

As previously announced, last week I had the pleasure of presenting our joint work on explainable reinforcement learning with our partners from Ruhr-University Bochum at the 8th Annual Conference on Machine Learning, Optimization and Data Science (LOD). The presentation sparked interesting questions and led to inspiring discussions in the enchanting ambiance of the medieval monastery in Tuscany.
As the conference was held in conjunction with the Advanced Course & Symposium on Artificial Intelligence & Neuroscience (ACAIN), we benefited from a very stimulating interdisciplinary environment with talks, tutorials, and posters covering topics ranging from the biology of neuronal development to implementation details of different deep learning frameworks.
We are looking forward to LOD 2023!