Position estimation of multiple objects in a 3D environment poses a challenging task, even more so in the presence of occlusions due to infrastructure. In this paper, we present a method to accurately localize up to 10 moving pedestrians by fusing the output of a Sparsity Driven Detector with volumes generated by a Shape-from-Silhouette approach. We also show how occlusion information from a 3D map of the environment can be integrated into our algorithm to further improve performance. We investigate the influence of different camera heights and image sizes on the optimization problem and demonstrate real-time capability for certain configurations. Additionally, our code is made publicly available under an open-source license.
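The fusion described in the abstract rests on a volumetric Shape-from-Silhouette (visual hull) step in which static occluders known from the 3D environment map prevent blocked cameras from voting against a voxel. The Python sketch below illustrates one way such occlusion-aware voxel carving could be set up; the function name, data layout, and voting rule are assumptions for illustration and do not reproduce the paper's implementation.

```python
# Minimal Shape-from-Silhouette voxel-carving sketch with per-camera occlusion
# masks. Illustrative only: names and conventions are assumed, not taken from
# the paper's code.
import numpy as np

def carve_visual_hull(voxel_centers, silhouettes, occlusion_masks, projections):
    """Mark a voxel occupied if every camera that actually sees it (i.e. is not
    blocked by static infrastructure at that pixel) observes foreground there.

    voxel_centers   : (N, 3) world coordinates of voxel centers
    silhouettes     : list of (H, W) boolean foreground masks, one per camera
    occlusion_masks : list of (H, W) boolean masks, True where infrastructure
                      blocks the view (assumed precomputed from the 3D map)
    projections     : list of (3, 4) camera projection matrices
    """
    n = voxel_centers.shape[0]
    votes = np.zeros(n, dtype=int)    # cameras seeing the voxel as foreground
    visible = np.zeros(n, dtype=int)  # cameras with an unoccluded view of it

    homog = np.hstack([voxel_centers, np.ones((n, 1))])  # (N, 4)

    for sil, occ, P in zip(silhouettes, occlusion_masks, projections):
        h, w = sil.shape
        pix = homog @ P.T             # (N, 3) homogeneous image coordinates
        with np.errstate(divide="ignore", invalid="ignore"):
            u = pix[:, 0] / pix[:, 2]
            v = pix[:, 1] / pix[:, 2]
        in_front = pix[:, 2] > 0
        in_image = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(np.nan_to_num(u).astype(int), 0, w - 1)
        vi = np.clip(np.nan_to_num(v).astype(int), 0, h - 1)

        unoccluded = in_image & ~occ[vi, ui]   # camera truly sees this voxel
        foreground = unoccluded & sil[vi, ui]

        visible += unoccluded.astype(int)
        votes += foreground.astype(int)

    # Occupied: at least one unoccluded view, and all unoccluded views agree.
    return (visible > 0) & (votes == visible)
```

Skipping the vote of a camera whose line of sight is blocked by mapped infrastructure keeps voxels behind walls or pillars from being carved away erroneously, which is the intuition behind integrating the 3D occlusion map into the localization pipeline.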
Fusing Shape-from-Silhouette and the Sparsity Driven Detector for Camera-Based 3D Multi-Object Localization with Occlusions
2019-10-01
782,000 bytes
Conference paper
Electronic Resource
English
Shape-from-Silhouette with Two Mirrors and an Uncalibrated Camera | British Library Conference Proceedings | 2006
Shape from inconsistent silhouette | British Library Online Contents | 2008
Structural damage localization using wavelet-based silhouette statistics | Online Contents | 2009
Octree-Based Fusion of Shape from Silhouette and Shape from Structured Light | British Library Conference Proceedings | 2002