Multi-object tracking in autonomous vehicles uses both camera data and LiDAR data during training, but no LiDAR data at query time; as a result, no LiDAR sensor is needed on a piloted autonomous vehicle. Example systems and methods rely on camera 2D object detections alone, rather than 3D annotations. Example systems/methods utilize a single network that takes a camera image as input and learns both object detection and dense depth in a multimodal regression setting, where the ground-truth LiDAR data is used only at training time to compute the depth regression loss. At test time (i.e., when deployed to pilot an autonomous vehicle) the network uses the camera image alone as input and predicts both object detections and the dense depth of the scene. LiDAR is used only for data acquisition and is not required for drawing 3D annotations or for piloting the vehicle.
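A minimal sketch of this training setup, assuming a PyTorch-style implementation: one shared backbone feeds a 2D detection head and a dense depth head, and the depth regression loss is evaluated only at pixels with projected LiDAR returns, so LiDAR appears nowhere in the inference path. All module names, tensor shapes, and the stand-in detection loss are illustrative assumptions, not the patent's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CameraOnlyDetDepthNet(nn.Module):
    """Single network: camera image in, (2D detections, dense depth) out."""
    def __init__(self, num_classes: int = 10, num_anchors: int = 9):
        super().__init__()
        # Shared convolutional backbone (placeholder for a real feature extractor).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-location class scores and box offsets.
        self.det_head = nn.Conv2d(64, num_anchors * (num_classes + 4), 1)
        # Depth head: one depth value per pixel, upsampled to input resolution.
        self.depth_head = nn.Conv2d(64, 1, 1)

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)
        det_out = self.det_head(feats)
        depth = F.interpolate(self.depth_head(feats), size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
        return det_out, depth

def depth_regression_loss(pred_depth, lidar_depth, lidar_mask):
    """Depth loss computed only where projected LiDAR returns exist (training only)."""
    return F.l1_loss(pred_depth[lidar_mask], lidar_depth[lidar_mask])

# Training step: LiDAR supervision is used only here.
model = CameraOnlyDetDepthNet()
image = torch.randn(2, 3, 128, 256)               # camera input
lidar_depth = torch.rand(2, 1, 128, 256) * 80.0   # sparse projected LiDAR depth (meters)
lidar_mask = torch.rand(2, 1, 128, 256) > 0.95    # valid-return mask (~5% of pixels)
det_out, pred_depth = model(image)
detection_loss = det_out.pow(2).mean()            # stand-in for a real 2D detection loss
loss = detection_loss + depth_regression_loss(pred_depth, lidar_depth, lidar_mask)
loss.backward()

# At test time, only the camera image is required:
# det_out, pred_depth = model(image)

The key design point reflected in the sketch is that the LiDAR tensors feed only the loss, never the forward pass, so the deployed model is camera-only.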
Framework For 3D Object Detection And Depth Prediction From 2D Images
08.09.2022
Patent
Electronic resource
English
IPC: G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS / B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION / G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL / G01S RADIO DIRECTION-FINDING
Framework for 3D object detection and depth prediction from 2D images | Europäisches Patentamt | 2025
A Framework For 3D Object Detection And Depth Prediction From 2D Images | Europäisches Patentamt | 2025
Multiresolution Object-of-Interest Detection for Images with Low Depth of Field | British Library Conference Proceedings | 1999
A Supervised Learning Framework for Generic Object Detection in Images | British Library Conference Proceedings | 2005