To address the challenge of balancing detection speed and accuracy in vehicle detection, this study presents a vehicle detection method based on an improved YOLOv5s. Building on the existing model, we optimize the training loss function and incorporate an attention mechanism, and we construct and train on a new dataset. While preserving fast vehicle recognition, the method further improves detection precision. In addition, a vehicle tracking module is attached to the detector's output, enabling prediction of the driving behavior of preceding vehicles. With this approach, real-time vehicle detection and tracking are achieved, yielding significant results when applied to vehicle-following systems. The method not only identifies vehicles rapidly and accurately but also monitors the driving state of vehicles ahead in real time, issuing timely warnings that help drivers make necessary adjustments and thereby reduce the probability of rear-end collisions.
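Since the abstract names the components only at a high level, the sketch below illustrates the general pipeline (YOLOv5s vehicle detection followed by a tracking step) using the stock ultralytics/yolov5 model from torch.hub and a naive centroid tracker as stand-ins. The paper's specific attention module, modified loss, dataset, and tracking algorithm are not described in this record, so none of them are reproduced here.

    # Minimal sketch (not the authors' implementation): vehicle detection with a
    # stock YOLOv5s model plus a naive centroid tracker as a stand-in for the
    # paper's tracking module. The attention mechanism, modified loss, and custom
    # dataset described in the abstract are not reproduced here.
    import torch

    VEHICLE_CLASSES = {"car", "bus", "truck", "motorcycle"}  # COCO vehicle labels

    # Load the standard pretrained YOLOv5s model via torch.hub.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    def detect_vehicles(frame):
        """Return vehicle detections in one frame as (x1, y1, x2, y2, confidence)."""
        results = model(frame)
        boxes = []
        for *xyxy, conf, cls in results.xyxy[0].tolist():
            if model.names[int(cls)] in VEHICLE_CLASSES:
                boxes.append((*xyxy, conf))
        return boxes

    class CentroidTracker:
        """Very simple tracker: associate each detection with the nearest previous
        centroid (within a pixel threshold), otherwise open a new track."""

        def __init__(self, max_dist=50.0):
            self.max_dist = max_dist
            self.tracks = {}   # track_id -> last centroid (cx, cy)
            self.next_id = 0

        def update(self, boxes):
            updated = {}
            for x1, y1, x2, y2, _conf in boxes:
                cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
                best_id, best_d2 = None, self.max_dist ** 2
                for tid, (px, py) in self.tracks.items():
                    d2 = (px - cx) ** 2 + (py - cy) ** 2
                    if tid not in updated and d2 < best_d2:
                        best_id, best_d2 = tid, d2
                if best_id is None:
                    best_id = self.next_id
                    self.next_id += 1
                updated[best_id] = (cx, cy)
            self.tracks = updated
            return updated

    # Example per-frame loop (frames would come from a driving video):
    # tracker = CentroidTracker()
    # for frame in frames:
    #     tracks = tracker.update(detect_vehicles(frame))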


    Title:

    Research on Vehicle Detection Method Based on Improved YOLOv5s


    Contributors:
    Ma, Liangliang (author) / Zhong, Runlu (author) / Shi, Xiaohong (author) / Yang, Peng (author)


    Publication date:

    20.12.2024


    Format / extent:

    1262424 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Research on Road Vehicle Detection based on improved Yolov5s

    Shao, Lei / Fan, Zhenqiang / Li, Ji et al. | British Library Conference Proceedings | 2022


    Tank Armored Vehicle Target Detection Based on Improved YOLOv5s

    Li, Xinwei / Mao, Yuxin / Yu, Jianjun et al. | IEEE | 2024


    UAV target detection algorithm based on improved YOLOv5s

    Zhang, Tao / Wang, Fenmei / Chen, Dongxu et al. | SPIE | 2023