Humans interacting with automated machines expect certain behaviors, either because they have experienced such behavior before (e.g., while driving) or because they form expectations of the machine (e.g., a user might expect an AI-based personal assistant to recognize any sentence spoken in any accent). In reality, these advanced AI systems may not behave perfectly, and their optimal decisions may differ from the subjectively optimal decisions a human user expects. This becomes a challenging problem for AI decision-making algorithms that control the complex behaviors of autonomous vehicles, which are affected by uncertain environments and by the vehicles' own sensing suites. This paper presents results from two large online user studies run in simulated autonomous driving scenarios. Our goal was to assess users' trust in the automated behaviors when presented with different explanations and HMI solutions. We found that specific explanations, which account for the risk of a driving scenario and what the vehicle is planning to do, can reduce discomfort and increase understanding of an automated driving maneuver. We also present a data-driven solution that automatically and probabilistically infers the explanation most suitable for a given driving context and user group, based on the data analysis and trust measures examined.
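The last sentence of the abstract describes probabilistically selecting the most suitable explanation from the driving context and user group. The following is a minimal illustrative sketch, not the paper's actual model: it assumes a hypothetical conditional distribution over explanation types given scenario risk and user group, which in practice would be estimated from the study's trust and discomfort data. All names and probabilities are invented for illustration.

```python
import random

# Hypothetical explanation types inspired by the abstract: "risk_and_plan"
# mentions both the scenario risk and the vehicle's intended maneuver.
EXPLANATIONS = ["none", "action_only", "risk_only", "risk_and_plan"]

# Hypothetical conditional distribution P(explanation | scenario_risk, user_group).
# In a data-driven setting these weights would be fit to user-study responses.
P_EXPLANATION = {
    ("high_risk", "novice"): {"risk_and_plan": 0.7, "risk_only": 0.2, "action_only": 0.1},
    ("high_risk", "expert"): {"risk_and_plan": 0.5, "action_only": 0.3, "risk_only": 0.2},
    ("low_risk", "novice"): {"action_only": 0.6, "none": 0.3, "risk_and_plan": 0.1},
    ("low_risk", "expert"): {"none": 0.6, "action_only": 0.4},
}

def select_explanation(scenario_risk: str, user_group: str, sample: bool = False) -> str:
    """Pick an explanation for a driving context and user group.

    With sample=False the mode of the conditional distribution is returned;
    with sample=True an explanation is drawn at random according to it.
    """
    dist = P_EXPLANATION.get((scenario_risk, user_group), {"action_only": 1.0})
    if sample:
        choices, weights = zip(*dist.items())
        return random.choices(choices, weights=weights, k=1)[0]
    return max(dist, key=dist.get)

if __name__ == "__main__":
    print(select_explanation("high_risk", "novice"))            # -> "risk_and_plan"
    print(select_explanation("low_risk", "expert", sample=True))  # random draw
```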
Trusting Explainable Autonomous Driving: Simulated Studies
2022 IEEE Intelligent Vehicles Symposium (IV) ; 1255-1260
2022-06-05
Conference paper
English