The progress achieved in transportation systems and artificial intelligence has amplified the use of intelligent transportation systems and Autonomous Vehicles (AVs). Indeed, AV systems have attracted considerable research in recent years, enabling multiple autonomous driving tasks, including scene understanding, visual prediction, decision-making, and communication. Communication, in particular, may become a bottleneck in low-resource autonomous driving systems that send the collected images to remote edge servers for processing and decision-making. This issue can be addressed by compressing the images on the AV and ensuring a good-quality reconstruction at the edge. In this paper, we propose a deep neural network for Compressed Sensing (CS)-based image reconstruction that integrates image semantic perception to improve the reconstruction process for visual prediction tasks. The reconstruction process is optimized with a perception-inspired loss in an end-to-end model learning process. The trained model is evaluated on autonomous driving datasets. The experimental results show that the proposed approach outperforms state-of-the-art approaches in both image reconstruction quality and processing time. Finally, we perform semantic urban scene segmentation on the reconstructed images to evaluate reconstruction quality for visual prediction tasks. Results on three semantic urban scene datasets demonstrate the efficiency of the proposed approach.
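As a concrete illustration of the pipeline described in the abstract, the sketch below shows block-based CS sampling of an image, a learned initial reconstruction, a small refinement CNN, and a loss that combines pixel MSE with a VGG-feature perceptual term. This is a minimal PyTorch sketch under assumed choices (block size, sampling ratio, network depth, the lambda weight, and the VGG feature extractor are all illustrative), not the actual PSCS-Net architecture or loss from the paper.

    # Minimal sketch (not the authors' code): block-based compressed sensing,
    # learned initial reconstruction, residual refinement, and a
    # perception-inspired loss (pixel MSE + VGG-feature distance).
    import torch
    import torch.nn as nn
    import torchvision

    BLOCK = 32                       # CS block size (assumption)
    RATIO = 0.25                     # sampling ratio M/N (assumption)
    M = int(RATIO * BLOCK * BLOCK)   # measurements per block

    class CSReconstructionNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Learned sampling: one strided conv acts as a per-block measurement matrix.
            self.sample = nn.Conv2d(3, M, kernel_size=BLOCK, stride=BLOCK, bias=False)
            # Initial reconstruction: map measurements back to pixel blocks.
            self.init_rec = nn.ConvTranspose2d(M, 3, kernel_size=BLOCK, stride=BLOCK, bias=False)
            # Deep refinement of the initial estimate.
            self.refine = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, x):
            y = self.sample(x)            # compressed measurements (sent by the AV)
            x0 = self.init_rec(y)         # coarse reconstruction at the edge
            return x0 + self.refine(x0)   # residual refinement

    class PerceptionLoss(nn.Module):
        """Pixel MSE plus an L2 distance between VGG16 feature maps."""
        def __init__(self, lam=0.1):
            super().__init__()
            # weights=None keeps the sketch self-contained; in practice a
            # pretrained feature extractor would be used.
            vgg = torchvision.models.vgg16(weights=None).features[:16].eval()
            for p in vgg.parameters():
                p.requires_grad_(False)
            self.vgg, self.lam, self.mse = vgg, lam, nn.MSELoss()

        def forward(self, rec, target):
            return self.mse(rec, target) + self.lam * self.mse(self.vgg(rec), self.vgg(target))

    if __name__ == "__main__":
        model, loss_fn = CSReconstructionNet(), PerceptionLoss()
        x = torch.rand(2, 3, 128, 128)    # dummy batch of road-scene crops
        loss = loss_fn(model(x), x)
        loss.backward()
        print(f"loss = {loss.item():.4f}")

The end-to-end training described in the abstract would optimize the sampling, initial reconstruction, and refinement stages jointly under such a combined loss; the feature-distance term is what makes the objective perception-inspired rather than purely pixel-wise.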
PSCS-Net: Perception Optimized Image Reconstruction Network for Autonomous Driving Systems
IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 2, pp. 1564-1579
2023-02-01
Article (Journal)
Electronic Resource
English
Auction-based cooperative perception for autonomous and semi-autonomous driving systems
European Patent Office | 2023
Auction-Based Cooperative Perception for Autonomous and Semi-Autonomous Driving Systems
European Patent Office | 2022