In the last week of February, my RL3 Dataninja colleague Moritz Lange and I had the chance to attend the AAAI Conference on Artificial Intelligence (AAAI 2024).

Listening to Yann LeCun speak in person about the challenges of machine learning was inspiring, and attending Moritz's presentation of our collaborative work “Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks” was a real pleasure.

Besides many interesting talks (one of them by my colleague from the Dataninja research training group, Patrick Kolpaczki, presenting his work on approximating Shapley values), attending such a big conference was a memorable experience, as was exploring the nature surrounding Vancouver.

Our participation in this year's edition of the LOD conference, as previously announced in one of our blog posts, proved to be an exceptionally enjoyable experience.

The systematic evaluation of auxiliary tasks in reinforcement learning published in “Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison” by first author Moritz Lange (a Dataninja colleague from Ruhr University Bochum) generated significant interest, as did my presentation of our work “Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning”.
The quality of our collaboration in the Dataninja (RL)3 project was acknowledged: we are excited to share that the comparison of auxiliary tasks for RL won the Best Paper Award!

Set against the picturesque backdrop of the Lake District, the conference provided an ideal setting for the thought-provoking keynote speeches, which spanned a wide range of topics, from neuroscience to large language models and their applications. The LOD conference is held in conjunction with the Advanced Course & Symposium on Artificial Intelligence & Neuroscience (ACAIN), a collaboration that fosters mutual appreciation of the advancements in each field, promotes the exchange of valuable insights, and enhances the experience and value of both conferences.

Beyond the scientific sessions, the hikes in the hills surrounding Lake Grasmere offered a fantastic opportunity for more in-depth discussions about science and life.

View of the Lake District, taken during a hike with my colleague.

From September 6th to 9th, we held the last annual retreat of the Dataninja research training group in Krefeld.
Alongside a talk on algorithmic fairness by our invited speaker, Dr. Alessandro Fabris from the Max Planck Institute for Security and Privacy, each PhD candidate from the Dataninja projects presented their progress and current investigations.
In my contribution (Raphael Engelhardt, PhD candidate from TH Köln, Campus Gummersbach, supervised by Prof. Dr. Wolfgang Konen), I spoke about our recent progress on explainability for deep reinforcement learning, published as an open access journal article.

Between talks, enough time was left for valuable discussions and informal exchanges of ideas about new approaches. As usual, it was a great pleasure to meet the other candidates and talk about the small victories and challenges of our journey towards the PhD. This year's fun activity required the combined wits of students and professors to solve the riddles of the escape rooms.
After the success of these three days, we are especially looking forward to our closing conference next year.
Group picture: Dataninja Retreat 2023

Lake Windermere on a misty morning (By Mkonikkara, CC BY-SA 3.0, via Wikimedia Commons)

For the second time, we (Raphael Engelhardt and Wolfgang Konen) have been given the opportunity to present our work at the Conference on Machine Learning, Optimization and Data Science (LOD).

To this year's 9th edition, held in Grasmere, England, UK, from September 22nd to 26th, we have the honor of contributing two papers stemming from the fruitful collaboration with our Dataninja colleagues at Ruhr University Bochum, Prof. Laurenz Wiskott and PhD student Moritz Lange:

  1. Our work entitled “Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning” describes how we translate the cybernetic board game Ökolopoly into the realm of reinforcement learning and evaluate various methods of handling large observation and action spaces. Large spaces pose a serious challenge to reinforcement learning, and we hope our case study will provide valuable approaches for fellow researchers. Additionally, we make the environment available to the scientific community with an OpenAI Gym-compatible API (see the sketch after this list).
  2. “Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison”, under the first authorship of Moritz Lange, is a thorough comparison of auxiliary tasks in a variety of control and robotic tasks, and shows how agents benefit from decoupled representation learning through auxiliary tasks in complex environments.
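
“OpenAI Gym-compatible” means, in practice, that the environment exposes the standard reset/step interface, so any Gym-aware agent can use it out of the box. The following is a minimal sketch of such an interface; the class name OekolopolyEnv and all space dimensions are illustrative placeholders, not taken from our actual implementation:

```python
import gym
import numpy as np
from gym import spaces


class OekolopolyEnv(gym.Env):
    """Minimal sketch of a Gym-compatible environment.

    All dimensions and dynamics below are placeholders; the real
    Ökolopoly environment implements the board game's rules.
    """

    def __init__(self):
        super().__init__()
        # Hypothetical observation: a vector of bounded game variables.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(8,), dtype=np.float32)
        # Hypothetical action: how to distribute action points over categories.
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(5,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._state = self.observation_space.sample()
        return self._state, {}

    def step(self, action):
        # A real implementation would apply the game's cybernetic rules here.
        self._state = self.observation_space.sample()
        reward = 0.0
        terminated = False  # e.g., the game reached an end state
        truncated = False   # e.g., the round limit was exceeded
        return self._state, reward, terminated, truncated, {}
```

Any agent implementation that speaks the Gym API can then interact with the environment simply through env.reset() and env.step(action).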

We are very grateful for this opportunity and look forward to hearing about other researchers’ advances in machine learning and to interesting discussions about current research topics.

We are delighted to announce that our article “Iterative Oblique Decision Trees Deliver Explainable RL Models” was accepted and is now part of the special issue “Advancements in Reinforcement Learning Algorithms” in the MDPI journal Algorithms (impact factor 2.2, CiteScore 3.7).

Explainability in AI and RL (known as XAI and XRL) is becoming increasingly important. In our paper we investigate several possibilities to replace complex “black box” deep reinforcement learning (DRL) models with intrinsically interpretable decision trees (DTs), which require orders of magnitude fewer parameters. A highlight of our paper is the finding that, on seven classic control RL problems, the DTs achieve rewards similar to the DRL models, sometimes even surpassing them. The key to this success is an iterative sampling method that we have developed.

In our work, we present and compare three different methods of collecting samples to train DTs from DRL agents. We test our approaches on seven problems, including all classic control environments from OpenAI Gym, LunarLander, and the CartPole-SwingUp challenge. Our iterative approach, which combines exploration by the DTs with the DRL agent’s predictions, is in particular able to generate shallow, understandable, oblique DTs that solve the challenges and even outperform the DRL agents they were trained from. Additionally, we demonstrate how, given their simpler structure and fewer parameters, DTs allow for inspection and insights, and offer higher degrees of explainability.
To readers interested in explainable AI, and understandable reinforcement learning in particular, we recommend taking a look at our open-access article.
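
To convey the flavor of the iterative sampling idea, here is a heavily simplified sketch: the current DT explores the environment while the DRL agent acts as an oracle that labels every visited state, and the DT is then retrained on the growing dataset. The interface drl_agent.predict(obs) and the use of scikit-learn's axis-parallel DecisionTreeClassifier (instead of the oblique trees from the paper) are assumptions made for illustration, not the actual implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def iterative_dt_training(env, drl_agent, iterations=5, episodes_per_iter=20):
    """Simplified sketch of iterative sample collection for DT training.

    Assumes `drl_agent.predict(obs)` returns a discrete action (the oracle)
    and `env` follows the Gym API. Axis-parallel trees stand in for the
    oblique DTs used in the paper.
    """
    states, actions = [], []
    dt = None
    for _ in range(iterations):
        for _ in range(episodes_per_iter):
            obs, _ = env.reset()
            done = False
            while not done:
                # The DRL oracle labels every visited state ...
                states.append(obs)
                actions.append(drl_agent.predict(obs))
                # ... while the current DT (once it exists) decides where to go.
                act = dt.predict([obs])[0] if dt is not None else actions[-1]
                obs, _, terminated, truncated, _ = env.step(act)
                done = terminated or truncated
        # Retrain the DT on all samples collected so far.
        dt = DecisionTreeClassifier(max_depth=5).fit(np.array(states), np.array(actions))
    return dt
```

In the actual method, the trees are oblique, i.e., their splits are linear combinations of features rather than single-feature thresholds, which is what keeps them shallow.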

Decision surfaces for MountainCar. The figure shows the decision surfaces of DRL models (1st column) and various DT models (2nd and 3rd columns) on the environments MountainCar (upper row) and MountainCarContinuous (lower row). The little black dots visualize various episodes, showing how MountainCar rolls back and forth in the valley until it finally reaches the goal on the mountaintop (x=0.5). The DRL models exhibit more complicated decision surfaces, while the DT models reach the same performance (number in round brackets in the title) with simpler decision surfaces.