Gaze estimation can be used to assess the attention level of drivers. Existing work predominantly focuses on improving model accuracy and often overlooks the uncertainty in input samples and labels. In this paper, we propose a framework for uncertainty modeling in driver gaze estimation via feature disentanglement, referred to as UnMoDE. Our approach first disentangles facial information into distinct feature spaces using an asymmetric dual-branch encoder to obtain gaze features. A multi-layer perceptron (MLP) then projects the gaze features and labels into an embedding space, where they are represented as Gaussian distributions and the uncertainty is described by a covariance matrix. Samples drawn at random from the gaze embedding distribution are used to estimate the most probable embedding representation. This estimated representation is used to regress the gaze direction and is also projected back into the gaze feature space, together with the identity information, to support facial reconstruction. Extensive experimental evaluations demonstrate that UnMoDE significantly outperforms baseline and state-of-the-art methods on recent driver gaze benchmark datasets, particularly in reducing the number of samples with large errors.
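Since only the abstract is available here, the following is a minimal sketch of the core idea it describes: projecting a gaze feature into a Gaussian embedding (mean plus covariance), sampling from that distribution, and regressing a gaze direction from the aggregated samples. The layer sizes, the module name GaussianGazeEmbedding, the number of samples, the diagonal-covariance simplification, and the use of PyTorch are all assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class GaussianGazeEmbedding(nn.Module):
    """Hypothetical module: gaze feature -> Gaussian embedding -> gaze direction."""
    def __init__(self, feat_dim=128, embed_dim=64, n_samples=8):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)       # embedding mean
        self.log_var = nn.Linear(feat_dim, embed_dim)  # log of a diagonal covariance (assumption)
        self.regressor = nn.Linear(embed_dim, 2)       # gaze direction as (yaw, pitch)
        self.n_samples = n_samples

    def forward(self, gaze_feat):
        mu = self.mu(gaze_feat)
        std = torch.exp(0.5 * self.log_var(gaze_feat))
        # Reparameterized random sampling from the gaze embedding distribution;
        # averaging the samples stands in for the "most probable embedding representation".
        eps = torch.randn(self.n_samples, *mu.shape, device=mu.device)
        embedding = (mu.unsqueeze(0) + eps * std.unsqueeze(0)).mean(dim=0)
        return self.regressor(embedding), mu, std

# Dummy usage: gaze features as they might come from the dual-branch encoder
gaze, mu, std = GaussianGazeEmbedding()(torch.randn(4, 128))
print(gaze.shape)  # torch.Size([4, 2])

A diagonal covariance keeps the sampling step cheap; the abstract does not specify whether the paper uses a full or diagonal covariance matrix, nor how the most probable embedding is selected from the samples.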





    Title :

    UnMoDE: Uncertainty Modeling for Driver Gaze Estimation via Feature Disentanglement


    Contributors:
    Hu, Daosong (author) / Cui, Mingyue (author) / Huang, Kai (author)


    Publication date :

    2025-07-01


    Size :

    1787239 bytes




    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English



    DRIVER DISTRACTION FROM UNCERTAIN GAZE ESTIMATION

    ALMÁSI PÉTER / OLLESSON NIKLAS / SANTIAGO ANJIN GABRIEL ALEXANDER et al. | European Patent Office | 2024


    Feature Disentanglement of Robot Trajectories

    Valdenegro-Toro, Matias / Harnack, Daniel / Wöhrle, Hendrik | ArXiv | 2021


    Young Driver Gaze (YDGaze): Dataset for driver gaze analysis

    Ceven, Suleyman / Albayrak, Ahmet / Bayir, Raif | IEEE | 2022


    GSA-Gaze: Generative Self-adversarial Learning for Domain Generalized Driver Gaze Estimation

    Han, Hongcheng / Tian, Zhiqiang / Liu, Yuying et al. | IEEE | 2023


    ESTIMATION OF DRIVER STATE BASED ON EYE GAZE

    HECHT RON M / ORON SHAUL / TSIMHONI OMER et al. | European Patent Office | 2024
