A method for estimating a depth of an environment includes generating, via a cross-attention model, a cross-attention cost volume based on a current image of the environment and a previous image of the environment in a sequence of images. The method also includes generating, via the cross-attention model, a depth estimate of the current image based on the cross-attention cost volume, the cross-attention model having been trained using a photometric loss associated with a single-frame depth estimation model. The method further includes controlling an action of a vehicle based on the depth estimate.
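
The following is a minimal PyTorch-style sketch of the idea described in the abstract; it is not the patented implementation, and all module names (CrossAttentionCostVolume, DepthHead, photometric_loss), shapes, and hyperparameters are illustrative assumptions. It shows cross-attention between current-frame and previous-frame features producing a cost-volume-like tensor, a small head that decodes it into a depth map, and a simple photometric reconstruction error of the kind used for self-supervised training.

```python
import torch
import torch.nn as nn


class CrossAttentionCostVolume(nn.Module):
    """Cross-attends current-frame features (queries) to previous-frame
    features (keys/values) to build a cost-volume-like representation."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_cur: torch.Tensor, feat_prev: torch.Tensor) -> torch.Tensor:
        # feat_cur, feat_prev: (B, C, H, W) features from a shared image encoder.
        b, c, h, w = feat_cur.shape
        q = feat_cur.flatten(2).transpose(1, 2)    # (B, H*W, C) queries from the current frame
        kv = feat_prev.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from the previous frame
        cost, _ = self.attn(q, kv, kv)             # (B, H*W, C) attended "cost volume" features
        return cost.transpose(1, 2).reshape(b, c, h, w)


class DepthHead(nn.Module):
    """Decodes the cross-attention cost volume into a positive depth map."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, 1, kernel_size=3, padding=1),
            nn.Softplus(),  # keep the predicted depth strictly positive
        )

    def forward(self, cost: torch.Tensor) -> torch.Tensor:
        return self.decoder(cost)


def photometric_loss(synthesized: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Simple L1 photometric error between a view synthesized from the
    predicted depth (and estimated ego-motion) and the real current image."""
    return (synthesized - target).abs().mean()


if __name__ == "__main__":
    feat_cur = torch.randn(1, 64, 24, 80)   # toy current-frame features
    feat_prev = torch.randn(1, 64, 24, 80)  # toy previous-frame features
    cost = CrossAttentionCostVolume()(feat_cur, feat_prev)
    depth = DepthHead()(cost)               # (1, 1, 24, 80) depth estimate
    print(depth.shape)
```

In a full self-supervised pipeline of this kind, the photometric loss would typically compare the current image against a view synthesized by warping a neighboring frame with the predicted depth and ego-motion, with per-pixel masks down-weighting unreliable regions, consistent with the "photometric masks" named in the record's title.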



    Title :

    PHOTOMETRIC MASKS FOR SELF-SUPERVISED DEPTH LEARNING


    Contributors:

    Publication date :

    2024-07-04


    Type of media :

    Patent


    Type of material :

    Electronic Resource


    Language :

    English


    Classification :

    IPC:    B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION / G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL



    Self-occlusion masks to improve self-supervised monocular depth estimation in multi-camera settings

    GUIZILINI VITOR / AMBRUS RARES ANDREI / GAIDON ADRIEN DAVID et al. | European Patent Office | 2024

    Self-occlusion masks to improve self-supervised monocular depth estimation in multi-camera settings

    GUIZILINI VITOR / AMBRUS RARES ANDREI / GAIDON ADRIEN DAVID et al. | European Patent Office | 2022

    Self extrinsic self-calibration via geometrically consistent self-supervised depth and ego-motion learning

    KANAI TAKAYUKI / CAMPAGNOLO GUIZILINI VITOR / AMBRUS RARES A et al. | European Patent Office | 2025

    EDS-Depth: Enhancing Self-Supervised Monocular Depth Estimation in Dynamic Scenes

    Yu, Shangshu / Wu, Meiqing / Lam, Siew-Kei et al. | IEEE | 2025