The safe deployment of autonomous vehicles (AVs) in real-world scenarios requires that AVs be accountable. One way of ensuring accountability is through the provision of explanations for what a vehicle has 'seen', done, and might do in a given scenario. Intelligible explanations can help developers and regulators assess AVs' behaviour and, in turn, uphold accountability. In this paper, we propose an interpretable (tree-based) and user-centric approach for explaining autonomous driving behaviours. In a user study, we examined different explanation types instigated by investigatory queries, and we conducted an experiment to identify scenarios that require explanations and the appropriate explanation types for such scenarios. Our findings show that the explanation type matters most in emergency and collision driving conditions. Moreover, providing intelligible explanations (especially contrastive types) with causal attributions can improve accountability in autonomous driving, and the proposed interpretable approach can help realise such explanations.
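[Illustrative sketch, not from the paper: the record above only summarises the tree-based approach, so the following minimal Python example shows one plausible way a decision tree's path can be turned into an intelligible, causal-style explanation of a driving action. The feature names, action labels, and toy data are hypothetical assumptions for illustration.]

# A minimal sketch, assuming a scikit-learn decision tree and hypothetical
# driving features; not the authors' actual model or dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["ego_speed_mps", "gap_to_lead_m", "pedestrian_near", "signal_red"]
ACTIONS = ["continue", "slow_down", "stop"]

# Toy training data: feature rows paired with the action the vehicle took.
X = np.array([
    [12.0, 40.0, 0, 0],   # clear road        -> continue
    [10.0,  8.0, 0, 0],   # small gap ahead   -> slow_down
    [ 9.0, 30.0, 1, 0],   # pedestrian nearby -> slow_down
    [11.0, 25.0, 0, 1],   # red signal        -> stop
    [ 8.0,  5.0, 1, 0],   # pedestrian + gap  -> stop
    [13.0, 50.0, 0, 0],   # clear road        -> continue
])
y = np.array([0, 1, 1, 2, 2, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample):
    """Walk the decision path for one observation and collect the
    threshold tests it satisfied as plain-language reasons."""
    row = sample.reshape(1, -1)
    path = clf.decision_path(row)          # sparse matrix of visited nodes
    leaf = clf.apply(row)[0]
    reasons = []
    for node in path.indices:
        if node == leaf:                    # leaf nodes carry no test
            continue
        f = clf.tree_.feature[node]
        t = clf.tree_.threshold[node]
        op = "<=" if sample[f] <= t else ">"
        reasons.append(f"{FEATURES[f]} = {sample[f]:g} {op} {t:.2f}")
    action = ACTIONS[clf.predict(row)[0]]
    return action, reasons

obs = np.array([10.0, 7.0, 0, 0])
action, reasons = explain(obs)
print(f"Action: {action}, because " + " and ".join(reasons))

[A contrastive explanation of the kind the abstract highlights ("why slow down rather than continue?") could, under the same assumptions, be derived by comparing which threshold tests differ between the observed path and the path that would lead to the foil action.]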
Towards Accountability: Providing Intelligible Explanations in Autonomous Driving
2021-07-11
1211912 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2021