Recent advances in machine learning could significantly alter the railroad industry by enabling fully autonomous trains. Effective interaction between a self-driving train and its environment requires accurate long-range railway detection. In this paper, we propose a framework for rail track segmentation on high-resolution images ($2168\times 4096$). The proposed approach accelerates inference six times by using two neural networks. The proposed architecture and its training procedure provide long-range railway segmentation up to 150 meters at 20 fps. We also propose an auxiliary algorithm that determines the possible paths of the train among all detected tracks. Additional experiments were performed to determine which data labeling approach has the greater impact. The proposed framework provides a balanced trade-off between computational efficiency and performance in the railroad segmentation problem.
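The abstract states that two neural networks are used to accelerate inference on 2168x4096 frames but does not describe how they are combined. One common pattern consistent with that description is a coarse-to-fine pipeline: a lightweight network segments a heavily downscaled copy of the frame, and a second network runs at full resolution only on the region the coarse mask flags as rail. The sketch below illustrates that pattern in PyTorch under this assumption; TinySegNet, segment_frame, and all layer and input sizes are hypothetical placeholders, not the authors' actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Placeholder encoder-decoder; stands in for either stage's network."""
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        logits = self.head(self.enc(x))
        # Upsample logits so the mask aligns with the network's input.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

@torch.no_grad()
def segment_frame(frame, coarse_net, fine_net, coarse_size=(271, 512)):
    """frame: (1, 3, H, W) tensor at full resolution, e.g. 2168 x 4096."""
    _, _, H, W = frame.shape
    # Stage 1: cheap coarse mask on a heavily downscaled copy of the frame.
    small = F.interpolate(frame, size=coarse_size, mode="bilinear",
                          align_corners=False)
    coarse = coarse_net(small).argmax(1)[0]             # (h, w) class map
    full_mask = torch.zeros(H, W, dtype=torch.long)
    ys, xs = torch.nonzero(coarse == 1, as_tuple=True)  # rail pixels
    if ys.numel() == 0:
        return full_mask                                # no rails found
    # Map the coarse rail bounding box back to full-resolution coordinates.
    sy, sx = H / coarse_size[0], W / coarse_size[1]
    y0, y1 = int(ys.min() * sy), min(int((ys.max() + 1) * sy), H)
    x0, x1 = int(xs.min() * sx), min(int((xs.max() + 1) * sx), W)
    # Stage 2: run the second network only on the full-resolution crop.
    fine = fine_net(frame[:, :, y0:y1, x0:x1]).argmax(1)[0]
    full_mask[y0:y1, x0:x1] = fine
    return full_mask

coarse_net, fine_net = TinySegNet().eval(), TinySegNet().eval()
mask = segment_frame(torch.randn(1, 3, 2168, 4096), coarse_net, fine_net)

Under such a scheme the expensive full-resolution pass touches only the rail-bearing crop, which is how a two-network split can trade per-frame compute against accuracy at the image borders; whether the paper uses this exact mechanism cannot be confirmed from the abstract alone.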


    Title:

    Railroad semantic segmentation on high-resolution images


    Contributors:
    Belyaev, Sergey (author) / Popov, Igor (author) / Shubnikov, Vladislav (author) / Popov, Pavel (author) / Boltenkova, Ekaterina (author) / Savchuk, Daniil (author)


    Publication date:

    2020-09-20


    Format / extent:

    545079 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Similar titles:

    Efficient Semantic Segmentation with Hyperspectral Images

    Peña, Fernando / Vidal Aguilar, Pilar / Suarez Gracia, Darío et al. | Springer Verlag | 2022


    SEMANTIC SEGMENTATION OF ROADS IN HIGH-RESOLUTION SATELLITE IMAGERY

    Al-Saad, Mina / Aburaed, Nour / Mansoori, Saeed Al et al. | TIBKAT | 2021


    Transformer-based semantic segmentation for large-scale building footprint extraction from very-high resolution satellite images

    Gibril, Mohamed Barakat A. / Al-Ruzouq, Rami / Shanableh, Abdallah et al. | Elsevier | 2024