This study introduces a new method to enhance the safety and error-prevention capabilities of advanced driver-assistance systems (ADAS) in intelligent vehicles. We address the significant computational and memory demands of real-time video processing by leveraging the BDD100K, KITTI, Cityscapes, and Waymo datasets. Our proposed hardware-software co-design integrates an MPSoC-FPGA accelerator for real-time multi-learning models. Our experimental results show that, despite an increase in ADAS tasks and model parameters compared with state-of-the-art studies, our model achieves 24,715 GOP performance with 4% lower power consumption (6.920 W) and 18.86% lower logic resource consumption. The model processes highway scenes at 22.45 FPS and attains 50.06% mAP for object detection, 57.05% mIoU for segmentation, 43.76% mIoU for lane detection, 81.63% IoU for drivable-area segmentation, and 9.78% SILog error for depth estimation. These findings confirm the system's effectiveness, reliability, and adaptability for ADAS applications and represent a significant advance in intelligent-vehicle technology, with potential for further improvements in accuracy and memory efficiency.
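For reference, the accuracy figures above use standard benchmark metrics. The short NumPy sketch below shows how a SILog depth error and a mean IoU are conventionally computed; it is an illustrative sketch under those standard definitions, not the paper's implementation, and the function names, eps guard, and percentage scaling are assumptions.

import numpy as np

def silog_error(pred_depth, gt_depth, eps=1e-8):
    # Scale-invariant logarithmic error (KITTI-style), reported as sqrt(.) * 100.
    valid = gt_depth > eps
    d = np.log(pred_depth[valid] + eps) - np.log(gt_depth[valid] + eps)
    return np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2) * 100.0

def mean_iou(pred_labels, gt_labels, num_classes):
    # Mean intersection-over-union over classes, as used for segmentation,
    # lane detection, and drivable-area evaluation.
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = pred_labels == c, gt_labels == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(np.logical_and(pred_c, gt_c).sum() / union)
    return float(np.mean(ious)) * 100.0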
Real-Time Multi-Learning Deep Neural Network on an MPSoC-FPGA for Intelligent Vehicles: Harnessing Hardware Acceleration With Pipeline
IEEE Transactions on Intelligent Vehicles, vol. 9, no. 6, pp. 5021-5032, June 2024
Article (Journal)
Electronic Resource
English