To take advantage of both stereo cameras and radar, this paper proposes a fusion approach that accurately estimates the location, size, pose, and motion of a threat vehicle with respect to the host vehicle from the observations of both sensors. First, the contour of the threat vehicle is fitted from stereo depth information, and the point on that contour closest to the host is found (the vision closest point). The fused closest point is then obtained by fusing the radar observations with the vision closest point. Next, the fitted contour is translated to the fused closest point, yielding the fused contour. Finally, the fused contour is tracked under rigid-body constraints to estimate the location, size, pose, and motion of the threat vehicle. Experimental results on both synthetic data and real-world road-test data demonstrate the effectiveness of the proposed algorithm.
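For illustration, the closest-point fusion and contour-translation steps described in the abstract can be sketched as below. The covariance-weighted point fusion, the host-frame x-y coordinates, and all names (fuse_points, R_vision, R_radar, etc.) are assumptions made for this sketch, not the paper's exact formulation.

    import numpy as np

    def closest_contour_point(contour_xy):
        """Return the contour vertex nearest to the host origin (0, 0)."""
        dist = np.linalg.norm(contour_xy, axis=1)
        return contour_xy[np.argmin(dist)]

    def fuse_points(p_vision, R_vision, p_radar, R_radar):
        """Covariance-weighted fusion of two independent estimates of the
        closest point (one from stereo, one from radar)."""
        Ri_v = np.linalg.inv(R_vision)
        Ri_r = np.linalg.inv(R_radar)
        W = np.linalg.inv(Ri_v + Ri_r)
        return W @ (Ri_v @ p_vision + Ri_r @ p_radar)

    def fuse_contour(contour_xy, p_radar, R_vision, R_radar):
        """Translate the stereo-fitted contour so that its closest point
        coincides with the fused closest point, giving the fused contour."""
        p_vis = closest_contour_point(contour_xy)
        p_fused = fuse_points(p_vis, R_vision, p_radar, R_radar)
        return contour_xy + (p_fused - p_vis)

    # Toy usage: a rectangular contour fitted from stereo depth (metres,
    # host frame) and a radar detection already converted to x-y.
    contour = np.array([[10.0, 2.0], [10.0, 4.0], [14.0, 4.0], [14.0, 2.0]])
    p_radar = np.array([9.8, 2.1])
    R_vis = np.diag([0.25, 0.25])   # assumed stereo measurement covariance
    R_rad = np.diag([0.04, 0.50])   # assumed radar measurement covariance
    print(fuse_contour(contour, p_radar, R_vis, R_rad))

The subsequent tracking of the fused contour under rigid-body constraints (e.g. filtering position, heading, and velocity over time) is not shown in this sketch.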


    Title:
    Collision Sensing by Stereo Vision and Radar Sensor Fusion

    Contributors:
    Wu, S. / Decker, S. / Chang, P. et al.

    Publication date:
    2008-06-01

    Size:
    580,489 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English



    Similar titles:

    Collision Sensing by Stereo Vision and Radar Sensor Fusion

    Wu, Shunguang / Decker, S. / Chang, Peng et al. | IEEE | 2009



    Collision Sensing by Stereo Vision and Radar Sensor Fusion

    Wu, S. / Decker, S. / Chang, P. et al. | British Library Conference Proceedings | 2009


    Collision Sensing by Stereo Vision and Radar Sensor Fusion

    Veeraraghavan, H | Online Contents | 2009


    Collision Sensing by Stereo Vision and Radar Sensor Fusion

    Wu, S. / Decker, S. / Chang, P. et al. | British Library Conference Proceedings | 2008