Increasing robot autonomy and the anticipated prevalence of human-robot teams have led to a focus on how humans trust robots. Additionally, some researchers argue that robots should be capable of intelligent disobedience, either in situations of epistemic misalignment (i.e., situational misunderstanding) or of normative conflict (i.e., when adherence to a normative principle supersedes compliance with a command). Little work exists on the effects of robot disobedience on trust, and what does exist focuses primarily on normative-conflict scenarios. Here we present novel results showing differences in trust evaluations between robots that exhibit strict obedience to commands and those that exhibit intelligent disobedience. Specifically, we report on a vignette-based study designed to test the effect of a robotic agent's intelligent disobedience on evaluations of trust in situations of epistemic misalignment.
Trusting a Disobedient Robot: Rejecting a Command for Constructive Reasons Improves Evaluations of Trust
04.03.2025
1027215 bytes
Conference paper
Electronic resource
English
Online Contents | 2000
Trusting a Humanoid Robot: Exploring Personality and Trusting Effects in a Human-Robot Partnership
BASE | 2020
Tema Archiv | 2008
Disobedient Officers in the Royal Navy, about 1680–1720
Taylor & Francis Verlag | 2022