This article presents the first subject-specific head pose estimation approach that uses only a single frequency-modulated continuous wave (FMCW) radar data frame. The proposed method employs a deep learning framework that estimates head rotation and orientation frame by frame, combining a convolutional neural network operating on range-angle radar plots with a PeakConv network. The method is validated on an in-house dataset of annotated head movements varying in roll, pitch, and yaw, recorded in two different indoor environments. Results show that the model estimates head poses with a relatively small error of approximately 6.7°–14.4° across all rotational axes and generalizes to unseen environments when trained in one scenario (e.g., a lab) and tested in another (e.g., an office), including the cabin of a car.
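The article itself does not include code; the following is only a minimal sketch of such a two-branch, single-frame pose regressor, assuming a PyTorch implementation. The class names (HeadPoseNet, PeakConvBlock), layer sizes, fusion by concatenation, and the 128x128 range-angle input shape are illustrative assumptions and not the authors' architecture; in particular, PeakConvBlock is a plain-convolution stand-in for a true PeakConv layer.

# Minimal sketch (not the authors' code): a two-branch regressor that fuses a
# CNN over a range-angle map with a PeakConv-style branch and predicts
# (roll, pitch, yaw) from a single radar frame.
import torch
import torch.nn as nn


class PeakConvBlock(nn.Module):
    """Placeholder for a PeakConv-style layer; an ordinary conv stands in here."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)


class HeadPoseNet(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        # Branch 1: conventional CNN over the range-angle plot.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        # Branch 2: PeakConv-style feature extractor on the same input.
        self.peak = nn.Sequential(
            PeakConvBlock(in_ch, 16), nn.MaxPool2d(2),
            PeakConvBlock(16, 32), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Regression head: fused features -> (roll, pitch, yaw) in degrees.
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(inplace=True), nn.Linear(64, 3))

    def forward(self, x):
        # Global-average-pool each branch, concatenate, and regress the pose.
        f = torch.cat([self.pool(self.cnn(x)), self.pool(self.peak(x))], dim=1)
        return self.head(f.flatten(1))


# Example: one 128x128 range-angle frame -> one pose estimate per frame.
model = HeadPoseNet()
pose = model(torch.randn(1, 1, 128, 128))   # shape (1, 3): roll, pitch, yaw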
Capturing Head Poses Using FMCW Radar and Deep Neural Networks
IEEE Transactions on Aerospace and Electronic Systems, vol. 61, no. 3, pp. 6748-6759
2025-06-01
Article (Journal)
Electronic Resource
English