The growing use of mobile vehicles, robots, and unmanned aerial vehicles has spurred research on indoor autonomous driving. These compact platforms face tight resource limits, making real-time execution of deep neural networks (DNNs) challenging. Vision-based DNNs offer robust driving capabilities but demand significant computational power, raising battery consumption and the cost of onboard GPUs. Lightweight mobile GPUs alleviate some of these issues but often lengthen inference time and sacrifice accuracy. To address these challenges, this research proposes an autonomous driving pipeline for mobile vehicles that combines cloud-based inference, a robust detection model, and end-to-end steering prediction. The edge-server-based pipeline achieves YOLOv4 object-detection inference 28 times faster than on-device execution, with 86% mAP@0.50, and can run multiple DNNs and supporting algorithms in real time during driving. The study also explores data augmentation techniques, notably mosaic augmentation, which improve the YOLOv4 model's accuracy and robustness, and contributes lightweight training and deployment pipelines for imitation-learning-based steering-prediction models.
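The offloading step at the heart of such a pipeline can be pictured as a thin client on the vehicle that ships camera frames to the edge server and receives detections back. The sketch below is a minimal illustration under assumed interfaces, not the paper's actual implementation: the endpoint URL, the JSON response shape, and the field names (detections, label, box) are all hypothetical.

```python
# Minimal client-side sketch of edge-server offloading (illustrative only).
# Assumption: an edge server running YOLOv4 exposes POST /detect, accepts a
# JPEG body, and returns JSON like
#   {"detections": [{"label": "...", "conf": 0.9, "box": [x, y, w, h]}]}.
# None of these names come from the paper.
import cv2
import requests

EDGE_URL = "http://edge-server:8000/detect"  # hypothetical endpoint

def detect_remote(frame, timeout=0.5):
    """JPEG-encode a BGR frame and offload inference to the edge server."""
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        return []
    resp = requests.post(
        EDGE_URL,
        data=buf.tobytes(),
        headers={"Content-Type": "image/jpeg"},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["detections"]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # onboard camera of the mobile vehicle
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for det in detect_remote(frame):
            x, y, w, h = map(int, det["box"])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("remote YOLOv4", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()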
Cloud-Based Multi-class Traffic Object Detection Toward Autonomous Vehicle
Smart Innovation, Systems and Technologies
International Conference on Information and Communication Technology for Intelligent Systems, Las Vegas, NV, USA, May 22–23, 2024
September 29, 2024
11 pages
Article/Chapter (Book)
Electronic resource
English
Remote inference, Deep learning, Mobile vehicle, Edge server, Steering angle prediction, Traffic-sign detection, Autonomous driving, Edge device, Imitation learning, Behavioral cloning, YOLOv4, Autonomous mobile robot, Engineering, Computational Intelligence, Artificial Intelligence, Communications Engineering, Networks, Cyber-physical systems, IoT, Professional Computing