As robots become a more significant part of humans’ daily lives, a key challenge is to bridge the gap between robots’ actions and humans’ understanding of what robots are doing and how they make their decisions. We present an approach to explaining local navigation based on Local Interpretable Model-agnostic Explanations (LIME), a popular method from the Explainable Artificial Intelligence (XAI) community for explaining individual predictions of black-box models. We show how LIME can be applied to a robot’s local path planner, experimentally evaluate the explanation method’s runtime, quality, and robustness, and discuss implications for the robotics domain.
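As a rough, hedged sketch of the general idea (not the paper’s implementation): LIME perturbs the inputs of a single black-box decision, queries the model on the perturbed samples, and fits a simple local surrogate that attributes the decision to individual input features. The snippet below illustrates this with the off-the-shelf lime Python package applied to a toy stand-in for a local planner; the feature names, the action set, and the toy planner itself are assumptions made purely for illustration and are not taken from the paper.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical local-planning features and discrete actions (illustrative only).
FEATURES = ["obstacle_distance", "obstacle_bearing", "goal_distance", "goal_bearing"]
ACTIONS = ["turn_left", "go_straight", "turn_right"]

def planner_predict(X):
    # Toy stand-in for a black-box local planner: maps local features to
    # action probabilities via a softmax over hand-crafted scores.
    X = np.atleast_2d(X)
    scores = np.column_stack([
        -X[:, 1] - 0.5 * X[:, 3],   # turn_left favoured by negative bearings
        X[:, 0] + X[:, 2],          # go_straight favoured by clear, distant obstacles
        X[:, 1] + 0.5 * X[:, 3],    # turn_right favoured by positive bearings
    ])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
background = rng.uniform(-1.0, 1.0, size=(500, len(FEATURES)))  # reference planner inputs

explainer = LimeTabularExplainer(
    background,
    feature_names=FEATURES,
    class_names=ACTIONS,
    discretize_continuous=True,
)

# Explain a single local-planning decision.
instance = np.array([0.8, 0.6, 0.3, 0.1])
top = int(np.argmax(planner_predict(instance)))
explanation = explainer.explain_instance(
    instance, planner_predict, labels=(top,), num_features=len(FEATURES)
)
for feature, weight in explanation.as_list(label=top):
    print(f"{ACTIONS[top]} <- {feature}: {weight:+.3f}")

The printed feature weights indicate which local features pushed the planner toward or away from the chosen action for this one situation, which is the kind of per-decision explanation the abstract describes.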
Explaining Local Path Plans Using LIME
Mechanisms and Machine Science
International Conference on Robotics in Alpe-Adria Danube Region, Klagenfurt, Austria, June 8-10, 2022
23.04.2022
8 pages
Article/Chapter (Book)
Electronic resource
English