Knowing what the driver is doing inside a vehicle is essential information for all stages of vehicle automation. For example, it can be used for adaptive warning strategies in combination with advanced driver assistance systems, for predicting the response time needed to take back control of a partially automated vehicle, or, in the future, for ensuring that the driver is ready to manually drive a highly automated vehicle. We present a system for driver activity recognition based on image sequences from an in-cabin time-of-flight camera. Our dataset includes actions such as entering and leaving a car as well as driver-object interactions such as using a phone or drinking. In the first stage, we localize body key points of the driver. In the second stage, we extract image regions around the localized hands. These regions and the determined 3D body key points are used as input to a recurrent neural network for driver activity recognition. With a mean average precision of 0.85, we achieve better classification rates than approaches relying only on body key points or only on images.
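The following is a minimal sketch of the two-stage fusion idea described in the abstract, assuming PyTorch and purely illustrative layer sizes, input shapes, and class counts (none of these are taken from the paper): per-frame 3D body key points and embeddings of the cropped hand regions are concatenated and fed to a recurrent classifier.

# Hypothetical sketch, not the authors' implementation.
import torch
import torch.nn as nn


class DriverActivityRNN(nn.Module):
    def __init__(self, num_keypoints=13, hand_feat_dim=128,
                 hidden_dim=256, num_classes=10):
        super().__init__()
        # Small CNN that embeds a cropped hand region (1-channel depth patch).
        self.hand_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hand_feat_dim), nn.ReLU(),
        )
        # LSTM over the fused per-frame features: 3D key points plus two hand embeddings.
        fused_dim = num_keypoints * 3 + 2 * hand_feat_dim
        self.rnn = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, keypoints, left_hands, right_hands):
        # keypoints:                (B, T, num_keypoints, 3) 3D body key points per frame
        # left_hands / right_hands: (B, T, 1, H, W) cropped hand regions per frame
        b, t = keypoints.shape[:2]
        kp = keypoints.reshape(b, t, -1)
        lh = self.hand_encoder(left_hands.reshape(b * t, *left_hands.shape[2:]))
        rh = self.hand_encoder(right_hands.reshape(b * t, *right_hands.shape[2:]))
        fused = torch.cat([kp, lh.reshape(b, t, -1), rh.reshape(b, t, -1)], dim=-1)
        out, _ = self.rnn(fused)
        # Classify from the final time step of the sequence.
        return self.classifier(out[:, -1])


# Example with random tensors: batch of 2 sequences, 16 frames, 64x64 hand crops.
model = DriverActivityRNN()
logits = model(torch.randn(2, 16, 13, 3),
               torch.randn(2, 16, 1, 64, 64),
               torch.randn(2, 16, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 10])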
Action and Object Interaction Recognition for Driver Activity Classification
01.10.2019
2592171 bytes
Conference Paper
Electronic Resource
English
Driver-Skeleton: A Dataset for Driver Action Recognition
IEEE | 2021
Driver Activity Recognition by Fusing Multi-object and Key Points Detection
Springer Verlag | 2024
Open Set Driver Activity Recognition
IEEE | 2020