On two-lane rural roads, a large number of overtaking accidents occur, causing many serious injuries and fatalities. In many cases, an inaccurate assessment of the traffic situation is identified as the major cause. Hence, the development of a driver assistance concept for these scenarios promises a high safety benefit. This paper presents the sensing and data-fusion approach of a system that provides such an assistance function. The level of information about the car's environment required for overtaking assistance depends on the phase of the overtaking maneuver. In early stages, i.e. just before the overtaking vehicle initiates its lane change, it is only necessary to obtain information about oncoming cars at long range. In late stages, i.e. when the overtaking speed is too low, dangerous situations can arise because the gap in front of the car being overtaken can no longer be reached. In this case, it is necessary to calculate an evasion path based on the perception of unoccupied space in front of the overtaking car. A fusion of different automotive sensors is proposed in order to cover all parts of the overtaking scenario in the system's perception: information about independently moving objects in front of the car is gained from a radar device by exploiting the Doppler shift. Moreover, a CMOS camera sensor is employed, and two algorithms are run on its video stream: a texture-based free-space detector and an object detection algorithm. Details of these algorithms are given in later sections of the paper. The proposed approach fuses raw radar object data with the output of the video-based object detection algorithm. As a result of this mid-level fusion, a list of moving objects over the whole range of the targeted field of view is obtained. For the free-space part, a typical occupancy-grid representation of the environment in front of the car is employed for shorter distances in the field of view; this is the area relevant for evasion maneuvers. The grid is filled by the camera's free-space detection and corrected with the known objects from the object list, yielding a high-level grid fusion. In particular, it is shown that the fusion of both sensor inputs is beneficial: first, oncoming vehicles can be detected at relatively long range with the radar device, where object detection from video frames becomes increasingly difficult; second, at close range, both sensors benefit from the fusion of multiple cues, as false positive detections can be filtered out and video object detections allow for an improved estimation of other vehicles' widths. Experimental results on real-world data recorded with a typical on-board system are presented in the results section.
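
A minimal sketch of the Doppler relation behind the radar's moving-object detection described above, assuming a 77 GHz automotive radar carrier. The carrier frequency, the function names, and the ego-motion threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: separating independently moving targets from stationary clutter
# via the Doppler shift. The 77 GHz carrier is an assumed value typical
# of automotive radar; it is not taken from the paper.

C = 299_792_458.0    # speed of light in m/s
F_CARRIER = 77e9     # assumed radar carrier frequency in Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity of a target from its Doppler shift
    (positive = approaching; factor 2 for the two-way radar path)."""
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

def is_independently_moving(doppler_shift_hz: float, ego_speed: float,
                            cos_azimuth: float = 1.0,
                            threshold: float = 1.0) -> bool:
    """A stationary target ahead appears to approach at the ego speed
    (projected onto the line of sight); a target whose radial velocity
    deviates from that by more than `threshold` m/s is flagged as
    independently moving, e.g. an oncoming car."""
    expected_stationary = ego_speed * cos_azimuth
    return abs(radial_velocity(doppler_shift_hz) - expected_stationary) > threshold
```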
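The mid-level object fusion could, in a minimal sketch, be a gated nearest-neighbour association between the two object lists: radar objects contribute accurate range and radial velocity, while video detections refine the lateral position and contribute the width estimate. All data structures, field names, and the gate size below are assumptions for illustration, not the paper's actual design.

```python
# Sketch of a mid-level object fusion: gated nearest-neighbour matching
# of radar objects and video detections. Field names and gate size are
# illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional
import math

@dataclass
class RadarObject:
    x: float          # longitudinal distance in m
    y: float          # lateral offset in m
    v_radial: float   # relative radial velocity in m/s

@dataclass
class VideoObject:
    x: float
    y: float
    width: float      # estimated vehicle width in m

@dataclass
class FusedObject:
    x: float
    y: float
    v_radial: float
    width: Optional[float]   # None when only the radar saw the object

def fuse(radar: List[RadarObject], video: List[VideoObject],
         gate: float = 2.5) -> List[FusedObject]:
    fused: List[FusedObject] = []
    used = set()
    for r in radar:
        best, best_d = None, gate
        for i, v in enumerate(video):
            if i not in used:
                d = math.hypot(r.x - v.x, r.y - v.y)
                if d < best_d:
                    best, best_d = i, d
        if best is not None:
            used.add(best)
            v = video[best]
            # radar keeps range and velocity; video refines the lateral
            # position and adds the width estimate
            fused.append(FusedObject(r.x, v.y, r.v_radial, v.width))
        else:
            # distant oncoming cars are typically radar-only detections
            fused.append(FusedObject(r.x, r.y, r.v_radial, None))
    # video-only detections (possible false positives) are omitted here
    # for brevity; a real system would track and validate them separately
    return fused
```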
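For the short-range part, the high-level grid fusion and an evasion-corridor check on top of it might be sketched as follows: cells seen as free by the camera's texture-based detector get low occupancy, footprints of known objects from the fused object list override this as occupied, and a straight corridor toward a target lateral offset is tested against the grid. Grid extent, cell size, occupancy values, and the straight-line corridor test are all illustrative assumptions; the abstract does not specify the paper's grid parameters or path-planning method.

```python
# Sketch of a high-level grid fusion with an evasion-corridor check.
# All parameters are assumptions for illustration.

import numpy as np

CELL = 0.25                   # cell size in m (assumed)
X_MAX, Y_HALF = 40.0, 10.0    # grid: 0..40 m ahead, +/-10 m lateral (assumed)

def make_grid() -> np.ndarray:
    nx, ny = int(X_MAX / CELL), int(2 * Y_HALF / CELL)
    return np.full((nx, ny), 0.5)        # 0.5 encodes "unknown"

def to_cell(x: float, y: float):
    return int(x / CELL), int((y + Y_HALF) / CELL)

def in_grid(grid, i: int, j: int) -> bool:
    return 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]

def apply_free_space(grid, free_points) -> None:
    """Lower occupancy for ground points the video free-space detector
    labels as free (iterable of (x, y) coordinates in m)."""
    for x, y in free_points:
        i, j = to_cell(x, y)
        if in_grid(grid, i, j):
            grid[i, j] = min(grid[i, j], 0.1)

def apply_objects(grid, objects, default_width: float = 1.8) -> None:
    """Correct the grid with the fused object list: object footprints are
    marked occupied, overriding free-space evidence. Each object is an
    (x, y, width) tuple; width may be None for radar-only detections."""
    for x, y, width in objects:
        w = width if width is not None else default_width
        for dy in np.arange(-w / 2.0, w / 2.0 + CELL, CELL):
            i, j = to_cell(x, y + dy)
            if in_grid(grid, i, j):
                grid[i, j] = 0.95

def corridor_is_free(grid, y_target: float, x_stop: float,
                     half_width: float = 1.0, threshold: float = 0.5) -> bool:
    """Test a straight evasion corridor from the ego lane (y = 0) toward
    the lateral offset y_target, reached at longitudinal distance x_stop."""
    for x in np.arange(0.0, x_stop, CELL):
        y = y_target * (x / x_stop)      # linear blend toward the target
        for dy in np.arange(-half_width, half_width + CELL, CELL):
            i, j = to_cell(x, y + dy)
            if in_grid(grid, i, j) and grid[i, j] >= threshold:
                return False
    return True
```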





    Title :

    Multi-level sensorfusion and computer-vision algorithms within a driver assistance system for avoiding overtaking accidents


    Additional title:

    Sensor fusion on multiple levels and machine-vision algorithms in a driver assistance system for avoiding overtaking accidents


    Contributors:


    Publication date :

    2008


    Size :

    14 pages, 8 figures, 2 tables, 28 references


    Type of media :

    Conference paper


    Type of material :

    Print


    Language :

    English





