Ground truth data may be too sparse to supervise the training of a machine-learned (ML) model well enough to achieve sufficient accuracy and recall. For example, in some cases, ground truth data may only be available for every third, tenth, or hundredth frame of raw data. When ground truth is sparse, training an ML model to detect the velocity of an object may comprise training the model to predict a future position of the object based at least in part on image, radar, and/or lidar data (e.g., frames for which no ground truth is available). The ML model may then be altered based at least in part on a difference between the predicted future position and the ground truth data associated with that future time.
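As a rough illustration of this training scheme (a minimal sketch, not code from the patent): the PyTorch snippet below supervises a toy network only at the sparse frames where ground truth exists, predicting a future position from an unlabeled frame and comparing it against the ground truth recorded for that future time. The model name VelocityNet, the sampling interval k, and all tensor shapes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class VelocityNet(nn.Module):
        # Toy stand-in (hypothetical name) for a network that consumes fused
        # image/radar/lidar features and predicts an object's future 2-D position.
        def __init__(self, feature_dim: int = 64):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(feature_dim, 32),
                nn.ReLU(),
                nn.Linear(32, 2),
            )

        def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
            return self.head(fused_features)

    model = VelocityNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    k = 10                                           # assumed: ground truth only every k-th frame
    num_frames, feature_dim = 100, 64
    features = torch.randn(num_frames, feature_dim)  # stand-in fused sensor features per frame
    gt_future_pos = torch.randn(num_frames // k, 2)  # sparse ground-truth object positions

    for gt_idx in range(1, num_frames // k):
        # Predict from an unlabeled frame between two ground-truth frames
        # toward the object's position at the next ground-truth timestamp.
        frame_idx = gt_idx * k - k // 2
        pred = model(features[frame_idx])
        # Alter the model based on the difference between the prediction
        # and the ground truth associated with that future time.
        loss = loss_fn(pred, gt_future_pos[gt_idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

A velocity estimate can then be derived from the predicted displacement divided by the known time between the unlabeled frame and the ground-truth timestamp.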





    Title: Object velocity detection from multi-modal sensor data

    Contributors:

    Publication date: 2023-04-18

    Type of media: Patent

    Type of material: Electronic Resource

    Language: English

    Classification:
    IPC: B60W Conjoint control of vehicle sub-units of different type or different function / G01S Radio direction-finding / G05D Systems for controlling or regulating non-electric variables



    Multi-Modal 3D Object Detection by Box Matching

    Liu, Zhe / Ye, Xiaoqing / Zou, Zhikang et al. | IEEE | 2024


    MLF3D: Multi-Level Fusion for Multi-Modal 3D Object Detection

    Jiang, Han / Wang, Jianbin / Xiao, Jianru et al. | IEEE | 2024


    Multi-Sensor Object Detection

    Zhang, Xinyu / Li, Jun / Li, Zhiwei et al. | Springer Verlag | 2023


    Loop Closure Using Multi-Modal Sensor Data

    Ramanathan, Narayanan / Meyer, Timon / Toumier, Glenn et al. | European Patent Office | 2023


    Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing

    Karle, Phillip / Fent, Felix / Huch, Sebastian et al. | IEEE | 2023