As mentioned in a previous blog post, we developed an iterative algorithm for training decision trees (DTs) from trained deep reinforcement learning (DRL) agents. The algorithm combines the simple structure of DTs with the predictive power of well-performing DRL agents. In our publication, we tested the idea on seven different control problems and successfully trained...
Presenting our research at the World Conference on XAI
After participating in its debut last year, it was a special pleasure to visit the second edition of the World Conference on Explainable Artificial Intelligence (xAI2024). The conference was a full immersion into all aspects of explainable AI. The keynote speech by Prof. Fosca Giannotti about hybrid decision-making and the two panel discussions on...
Dataninja Closing Conference
Time flies... It wasn't that long ago (or at least it feels like it) that I wrote a blog post about the first Dataninja Retreat. Now we have already held the closing conference of the Dataninja project. From Tuesday 25th to Thursday 27th, we had the pleasure of enjoying three days of science and meetups at...
Visiting the AAAI Conference on Artificial Intelligence in Vancouver
In the last week of February, my (RL)^3 Dataninja colleague Moritz Lange and I had the chance to visit the 2024 AAAI Conference on Artificial Intelligence. Listening to Yann LeCun speak in person about the challenges of machine learning was inspiring, and attending Moritz' presentation of our collaborative work "Interpretable Brain-Inspired Representations Improve RL Performance on...
Dataninja Retreat 2023
From September 6th to September 9th, we held the last annual retreat of the Dataninja research training group in Krefeld. Alongside a talk on algorithmic fairness by our invited speaker, Dr. Alessandro Fabris (Max Planck Institute for Security and Privacy), each PhD candidate from the Dataninja projects presented their progress and current investigations. In my contribution...
Article Published in Special Issue of "Algorithms"
We are delighted to announce that our article “Iterative Oblique Decision Trees Deliver Explainable RL Models” was accepted and is now part of the special issue “Advancements in Reinforcement Learning Algorithms” of the MDPI journal Algorithms (impact factor 2.2, CiteScore 3.7). Explainability in AI and RL (known as XAI and XRL) is becoming increasingly important....
AI in Nuclear Power Plant Simulation: Steinmüller Engineering Award for Niklas Fabig
The operation of nuclear power plants (NPPs) is one of the most safety-critical tasks in industry. Before AI methods are used in this area, it should be thoroughly investigated and evaluated via simulations whether AI can learn (e.g., by reinforcement learning, RL) to power up and shut down a nuclear reactor, and how well such...
Dataninja Retreat 2022
The second edition of the annual Dataninja Retreat took place this September in Tecklenburg. The different Dataninja projects presented their progress, we had the pleasure of attending a lecture by Prof. Xiaoyi Jiang, and fresh PhD graduates of the KI-Starter project kindly shared experiences and some tips from their recently completed PhD journeys. Last but not least, it...
Dataninja Retreat 2021
Between the third and fourth waves of the COVID-19 pandemic, in September 2021 we were lucky enough to hold the annual retreat of the Dataninja research group in person, which was at the same time the group's first ever in-person meeting. The rich and balanced program included presentations of the different Dataninja projects by the...
More Transparency in Artificial Intelligence
(Image: Christoph J Kellner) For the research project (RL)^3, which is led by Laurenz Wiskott (RUB) and Wolfgang Konen (TH Köln) and illustrated in the figure above, TH Köln has now published a press release: https://www.th-koeln.de/hochschule/mehr-transparenz-bei-kuenstlicher-intelligenz_84905.php (sorry, in German only!). (RL)^3 is part of the graduate school Dataninja (Trustworthy AI for Seamless...