The challenge of developing a robust, real-time driver gaze classification system is that it must handle difficult edge cases that arise in real-world driving conditions: extreme lighting variations, eyeglass reflections, sunglasses, and other occlusions. We propose a single-camera, end-to-end framework for classifying driver gaze into a discrete set of regions. The framework comprises data collection, semi-automated annotation, offline classifier training, and an online real-time image-processing pipeline that classifies the driver's gaze region. We evaluate an implementation of each component on various subsets of a large on-road dataset. The key insight of our work is that robust driver gaze classification in real-world conditions is best approached by leveraging supervised learning to generalize over the edge cases present in large annotated on-road datasets.
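The paper itself provides no code in this record; the following Python sketch merely illustrates the offline-training / online-classification split the abstract describes. The feature layout, gaze-region labels, and the choice of a generic random-forest classifier are all assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of a discrete gaze-region classifier (not the authors' code).
# Per-frame face/eye features are assumed to be extracted upstream; random
# placeholder data stands in for an annotated on-road dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed gaze regions; the paper's actual region set may differ.
GAZE_REGIONS = ["road", "left_mirror", "right_mirror",
                "rearview_mirror", "instrument_cluster", "center_stack"]


def train_offline(features: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """Offline step: fit a classifier on annotated on-road feature vectors."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf


def classify_frame(clf: RandomForestClassifier, frame_features: np.ndarray) -> str:
    """Online step: map one frame's feature vector to a discrete gaze region."""
    region_index = int(clf.predict(frame_features.reshape(1, -1))[0])
    return GAZE_REGIONS[region_index]


if __name__ == "__main__":
    # Placeholder features (e.g., head-pose angles, eye landmarks) and labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 16))
    y = rng.integers(0, len(GAZE_REGIONS), size=600)
    model = train_offline(X, y)
    print(classify_frame(model, X[0]))
```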
A Framework for Robust Driver Gaze Classification
SAE Technical Papers
SAE 2016 World Congress and Exhibition; 2016
05.04.2016
Conference Paper
English