The rapid growth in global traffic and population has intensified challenges such as air pollution, road congestion, and accident rates, highlighting the need for intelligent transportation management systems equipped with automated traffic monitoring. This paper introduces a deep learning-based smart traffic surveillance system that addresses these challenges by performing the key tasks of vehicle detection, classification, and tracking. The system preprocesses traffic images using Total Variation Denoising (TVD) for dynamic contrast adjustment, followed by Entropy Rate Superpixel Segmentation (ERS) to cluster uniform regions and reduce image complexity. YOLOv9 is employed for vehicle detection, with subsequent modules handling classification and tracking. Feature extraction for classification combines the Gray-Level Co-occurrence Matrix (GLCM) with Zernike Moments, and NASNet serves as the classifier. Vehicle counting is performed with a Voronoi Tessellation algorithm that associates vehicles across frames using appearance and motion features, supported by Farneback Optical Flow. Evaluated on the UAVDT and UAVID datasets, the system outperforms state-of-the-art methods, achieving detection precision of 0.914 and 0.932, tracking precision of 0.887 and 0.905, and classification accuracy of 91.60% and 92.80%, respectively, demonstrating its effectiveness in automated traffic monitoring.
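To illustrate the handcrafted feature-extraction step the abstract describes (GLCM texture statistics plus Zernike shape moments per detected vehicle), the following is a minimal sketch. The library choices (OpenCV, scikit-image, mahotas), the function `vehicle_features`, and all parameter values are assumptions for illustration, not taken from the paper.

```python
import cv2
import numpy as np
import mahotas
from skimage.feature import graycomatrix, graycoprops

def vehicle_features(patch_bgr, radius=21, zernike_degree=8):
    """Return a GLCM + Zernike feature vector for one detected vehicle patch."""
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)

    # GLCM texture statistics over four orientations at distance 1 (assumed setup).
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "homogeneity", "energy", "correlation")])

    # Zernike moments describe patch shape; here computed on a resized,
    # Otsu-binarized silhouette (a common, but assumed, preprocessing choice).
    resized = cv2.resize(gray, (2 * radius + 1, 2 * radius + 1))
    _, binary = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    shape = mahotas.features.zernike_moments(binary, radius, degree=zernike_degree)

    return np.hstack([texture, shape])
```

In such a pipeline, the resulting vector would be fed to the classifier (NASNet in the paper) alongside or in place of learned features; the exact fusion is not specified in the abstract.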
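The abstract also mentions Farneback Optical Flow in support of tracking. Below is a minimal sketch of dense Farneback flow between consecutive frames used to estimate the motion of a detection box; the helper `mean_box_motion` and the parameter values are standard OpenCV-style choices assumed for illustration, not the paper's configuration.

```python
import cv2
import numpy as np

def mean_box_motion(prev_frame, next_frame, box):
    """Estimate the average (dx, dy) displacement inside a detection box."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow over the whole frame; flow has shape (H, W, 2).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    x1, y1, x2, y2 = box                       # box = (x1, y1, x2, y2) in pixels
    region = flow[y1:y2, x1:x2]                # per-pixel (dx, dy) inside the box
    return region.reshape(-1, 2).mean(axis=0)  # mean displacement of the box
```

A motion estimate of this kind can complement appearance features when associating detections across frames, which is how the abstract positions optical flow relative to the Voronoi-based counting step.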
Intelligent Transportation Surveillance via YOLOv9 and NASNet over Aerial Imagery
18.02.2025
1031386 bytes
Conference paper
Electronic resource
English
ArXiv | 2024 | Automatic License Plate Detection Using YOLOv9
IEEE | 2024 |