Research on Perspicuous Automated Decisions

Artificial neural networks are being proposed for automated decision making under uncertainty in many visionary contexts, including high-stakes tasks such as navigating autonomous cars through dense traffic.

Against this background, it is imperative that decision-making entities meet central societal desiderata regarding dependability, perspicuity, explainability, and robustness. Sequential decision-making problems under uncertainty are typically captured formally as variations of Markov decision processes (MDPs).
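
For readers unfamiliar with the formalism, the following is the standard textbook definition of an MDP in general notation; it is a sketch for orientation only, not the concrete Racetrack model used on this site.

```latex
% Standard textbook definition of an MDP (general notation, not the concrete
% Racetrack model distributed via this site):
\[
  \mathcal{M} = (S, A, P, R), \qquad
  P : S \times A \times S \to [0,1]
  \ \text{with} \ \sum_{s' \in S} P(s,a,s') = 1, \qquad
  R : S \times A \to \mathbb{R}.
\]
% A policy $\pi : S \to A$ resolves the choice of actions; in the setting
% discussed here, that policy is represented by a trained neural network.
```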

This website is centered around the Racetrack benchmarks and extensions thereof, which together connect the autonomous driving challenge to the modelling world of MDPs. We provide tools and discuss approaches to study the dependability and robustness of NN-based decision entities, which in turn are based on state-of-the-art NN learning techniques. We demonstrate here that this approach can be regarded as providing laboratory conditions for a systematic, structured and extensible comparative analysis of NN behavior, NN learning performance, and NN verification and analysis techniques.

The core problem is the navigation of a vehicle on a gridded 2D track from start to goal, as instructed by a neural network and subject to probabilistic disturbances of the vehicle control (also called noise; in that case the action suggested by the NN is ignored). This problem family, known as Racetrack in the AI community, is arguably very far away from the true challenges of automated driving, but (i) it provides a common formal ground for basic studies on NN behavioral properties, (ii) it is easily scalable, (iii) it is extensible in a natural manner, namely by undoing some of its abstractions (which we are doing already), and (iv) it is made available to the scientific community together with a collection of supporting tools. First impressions of the analysis of Racetrack benchmarks with our methods and tools can be inspected below. More details are provided on the specific pages accessible through the menu on the left.
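To make the dynamics concrete, here is a minimal Python sketch of a single step of such a noisy Racetrack model. It is illustrative only: the helper names (track.is_on_track, track.is_goal), the 7% noise probability, and the endpoint-only collision check are assumptions for the sake of the example, not the benchmark code distributed via this site.

```python
import random

# Illustrative sketch of one step of the noisy Racetrack dynamics described above.
# NOISE_PROB, the track interface and the collision handling are assumptions.

NOISE_PROB = 0.07  # assumed probability that the suggested action is ignored

def step(position, velocity, acceleration, track):
    """Advance the car by one time step.

    position, velocity, acceleration are (x, y) integer pairs; `acceleration`
    is the action suggested by the neural network, each component in {-1, 0, 1}.
    `track` is assumed to expose is_on_track(cell) and is_goal(cell).
    """
    # Probabilistic disturbance: with probability NOISE_PROB the suggested
    # acceleration is ignored, so the velocity stays unchanged.
    if random.random() < NOISE_PROB:
        acceleration = (0, 0)

    velocity = (velocity[0] + acceleration[0], velocity[1] + acceleration[1])
    position = (position[0] + velocity[0], position[1] + velocity[1])

    # A full implementation would check every grid cell crossed between the old
    # and the new position; for brevity only the endpoint is checked here.
    if not track.is_on_track(position):
        return position, velocity, "crashed"
    if track.is_goal(position):
        return position, velocity, "goal"
    return position, velocity, "driving"
```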

This topic has been covered in an invited talk by Holger Hermanns entitled "Lab Conditions for Research on Explainable Automated Decisions" at SETTA 2020.

The video is available for download here.

Latest News

Paper "Lab Conditions for Research on Explainable Automated Decisions" at TAILOR 2020

Our position paper “Lab Conditions for Research on Explainable Automated Decisions” has been accepted at TAILOR 2020 (Foundations of Trustworthy AI - Integrating Learning, Optimization and Reasoning).