In this paper we describe our SmartCar testbed: a real-time data acquisition system and a machine learning framework for modeling and recognizing driver maneuvers at a tactical level, with special emphasis on how context affects the driver's performance. The perceptual input is multimodal: four video signals capture the surrounding traffic, the driver's head, and the driver's viewpoint, while a real-time data acquisition system records the car's brake, gear, steering wheel angle, speed, and acceleration throttle signals. Over 70 drivers have each driven the SmartCar for about 1.25 hours in the greater Boston area. Graphical models (HMMs and coupled HMMs) have been trained on the experimental driving data to create models of seven driver maneuvers: passing, changing lanes right and left, turning right and left, starting, and stopping. We show that, on average, our models recognize a maneuver about 1 second before it starts taking place. These models would therefore be essential for facilitating operating-mode transitions between the driver and driver assistance systems, preventing potentially dangerous situations, and creating more realistic automated cars in car simulators.
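The abstract names HMMs trained per maneuver as the recognition machinery. As a rough illustration only, the sketch below fits one Gaussian HMM per maneuver class on windows of the recorded car signals and labels a new window by the highest-scoring model, using the hmmlearn library. The function names, the three-state choice, and the diagonal-covariance setting are assumptions for the sketch, not the paper's configuration; coupled HMMs, which the paper also uses, are not provided by hmmlearn and would need a separate implementation.

```python
import numpy as np
from hmmlearn import hmm

# The seven maneuver classes described in the abstract.
MANEUVERS = ["passing", "lane_change_right", "lane_change_left",
             "turn_right", "turn_left", "starting", "stopping"]

def train_maneuver_models(examples, n_states=3):
    """Fit one Gaussian HMM per maneuver (illustrative sketch).

    `examples` maps a maneuver name to a list of (T_i, n_features)
    arrays, e.g. windows of [brake, gear, steering angle, speed,
    throttle] samples; the feature layout is an assumption here.
    """
    models = {}
    for name, seqs in examples.items():
        X = np.concatenate(seqs)          # hmmlearn expects stacked samples
        lengths = [len(s) for s in seqs]  # plus the per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, window):
    """Label a signal window with the maximum-likelihood maneuver model."""
    return max(models, key=lambda name: models[name].score(window))
```

Run online, a classifier like this would score a sliding window of the most recent signal samples against every model; anticipatory recognition of the kind the abstract reports (about 1 second before maneuver onset) corresponds to the correct model already dominating on windows that end before the maneuver begins.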
Graphical Models for Driver Behavior Recognition in a SmartCar
2000
Conference paper
English