Exploring Decision Shifts in Autonomous Driving With Attribution-Guided Visualization

Given the critical need for more reliable autonomous driving systems, explainability has become a key focus of the research community. In autonomous driving models, even minor perception differences can significantly influence decision-making, and this influence often diverges markedly from human cognition. However, understanding the specific reasons why a model decides to stop or to proceed remains a significant challenge. This paper presents an attribution-guided visualization method that explores the triggers behind decision shifts, providing clear insight into the underlying “why” and “why not” of such decisions. We propose a cumulative layer fusion attribution method that identifies the parameters most critical to decision-making. These attributions then inform the visualization optimization: attribution-guided weights are applied to crucial generation parameters, ensuring that decision changes are driven only by modifications to critical information. Furthermore, we develop an indirect regularization method that improves visualization quality without requiring additional hyperparameters. Experiments on large datasets demonstrate that our method produces insightful visual explanations and outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
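As a rough sketch of the idea the abstract describes, the following Python (PyTorch) snippet shows one plausible reading: a layer-fused attribution map gates an optimized perturbation so that a decision flips only through edits to critical regions. The TinyDriver network, the fused_attribution heuristic (|activation × gradient| summed across ReLU layers), and the logit-difference objective in explain_decision_shift are illustrative assumptions, not the paper's actual cumulative layer fusion rule or loss, and the indirect regularization term is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDriver(nn.Module):
    """Toy stand-in for a driving decision network (logits: stop / go)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        h = self.features(x)
        return self.head(h.mean(dim=(2, 3)))

def fused_attribution(model, x, cls):
    """Fuse |activation * gradient| maps from every ReLU layer into one
    input-resolution map -- a generic layer-fusion heuristic standing in
    for the paper's cumulative layer fusion attribution."""
    acts = []
    def keep(_m, _i, out):
        out.retain_grad()          # keep gradients on intermediate maps
        acts.append(out)
    hooks = [m.register_forward_hook(keep)
             for m in model.features if isinstance(m, nn.ReLU)]
    model(x)[0, cls].backward()    # gradient of the decision score
    for h in hooks:
        h.remove()
    fused = torch.zeros(1, 1, *x.shape[2:])
    for a in acts:
        layer_map = (a * a.grad).abs().mean(dim=1, keepdim=True)
        fused += F.interpolate(layer_map, size=x.shape[2:],
                               mode="bilinear", align_corners=False)
    return (fused / fused.max()).detach()        # normalize to [0, 1]

def explain_decision_shift(model, x, from_cls, to_cls, steps=200, lr=0.05):
    """Optimize a perturbation gated by the attribution map, so the
    decision flips through changes to critical information only."""
    w = fused_attribution(model, x.clone().requires_grad_(True), from_cls)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + w * delta)            # attribution-weighted edit
        loss = logits[0, from_cls] - logits[0, to_cls]
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + w * delta).detach()              # counterfactual frame

frame = torch.rand(1, 3, 64, 64)                 # dummy camera frame
model = TinyDriver().eval()
shifted = explain_decision_shift(model, frame, from_cls=0, to_cls=1)

Gating the perturbation by the attribution map (w * delta) is what confines the optimization to parameters and regions the attribution marks as decision-critical; an unweighted perturbation would be free to flip the decision through irrelevant pixels.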
IEEE Transactions on Intelligent Transportation Systems, Vol. 26, No. 3, pp. 4165-4177
1 March 2025
3722728 bytes
Journal article
Electronic resource
English
Similar items:
AUTONOMOUS DRIVING DECISION PLANNING AND AUTONOMOUS VEHICLE | Europäisches Patentamt | 2024
Autonomous Driving Cars: Decision-Making | Springer Verlag | 2020
Autonomous Driving Lane Change Decision-making | DataCite | 2024