Vehicle control in an autonomous car requires a sequence of commands to accomplish a specific task, such as taking a turn, stopping at a traffic light, following a lane, or changing lanes. This sequential nature indicates that self-driving should not be treated as a single-frame problem; it is a context-based problem that needs a temporal system able to accommodate multiple frames. Given this added complexity, we propose a network that accepts sequential image input: a time-distributed Convolutional Neural Network (CNN) handles the visual recognition task, followed by an LSTM that captures temporal state dependencies. By modifying the CARLA environment, we capture frame-by-frame images together with detailed information on throttle, speed, steering angle, and brake, as well as states such as direction, speed limit, and traffic-light state. We use the CARLA control agent to record all camera images and the corresponding information automatically. We demonstrate that this approach performs well in the CARLA environment under moderately dense traffic: it reaches the destination in just 93.978 seconds, faster than both the ground truth and a standard convolutional model. Although the driving agent is somewhat rough, with a speed-above score of around 13.27, it shows better steering control, and therefore better stability.

Keywords: Time Distributed, LSTM, CNN, Carla, Autonomous Car
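The architecture described above, a per-frame CNN wrapped in a time-distributed layer feeding an LSTM, can be sketched as follows. This is a minimal illustration in Keras; the sequence length, frame size, layer widths, and three-way control output are assumptions for illustration, not the authors' published configuration.

```python
# Minimal sketch (assumed hyperparameters, not the paper's exact network):
# a TimeDistributed CNN extracts per-frame features, an LSTM models the
# temporal dependencies, and a dense head regresses the control commands.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 5, 88, 200, 3  # assumed window length and frame size

frames = layers.Input(shape=(SEQ_LEN, H, W, C))

# Per-frame visual encoder, applied identically to every frame in the window.
cnn = models.Sequential([
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
])
features = layers.TimeDistributed(cnn)(frames)

# LSTM summarizes the frame sequence into a temporal state.
state = layers.LSTM(64)(features)

# Regress steering, throttle, and brake from the temporal state.
controls = layers.Dense(3, name="steer_throttle_brake")(state)

model = models.Model(frames, controls)
model.compile(optimizer="adam", loss="mse")
```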
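The data-collection setup can likewise be sketched with the CARLA Python API: an autopilot-driven vehicle carries an RGB camera whose callback logs each frame together with the throttle, steering, brake, speed, speed limit, and traffic-light state. The spawn point, output paths, and logging format here are assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of frame-by-frame data capture with the CARLA 0.9.x API.
import time
import carla

client = carla.Client("localhost", 2000)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn a vehicle and let CARLA's built-in control agent drive it.
vehicle = world.spawn_actor(
    bp_lib.filter("vehicle.*")[0],
    world.get_map().get_spawn_points()[0],
)
vehicle.set_autopilot(True)

# Attach a forward-facing RGB camera to the vehicle.
camera = world.spawn_actor(
    bp_lib.find("sensor.camera.rgb"),
    carla.Transform(carla.Location(x=1.5, z=2.4)),
    attach_to=vehicle,
)

def on_frame(image):
    # Save the frame and the control/state values it was captured with.
    image.save_to_disk(f"out/{image.frame:06d}.png")
    ctrl = vehicle.get_control()
    vel = vehicle.get_velocity()
    speed = 3.6 * (vel.x**2 + vel.y**2 + vel.z**2) ** 0.5  # km/h
    print(image.frame, ctrl.throttle, ctrl.steer, ctrl.brake, speed,
          vehicle.get_speed_limit(), vehicle.get_traffic_light_state())

camera.listen(on_frame)
time.sleep(60)   # record for a while
camera.stop()
```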





    Title:

    End-to-End Time Distributed Convolution Neural Network Model for Self Driving Car in Moderate Dense Environment


    Contributors:

    Publication date:

    2021-07-07


    Remarks:

    doi:10.29122/jtike.v2i1.4904
    Jurnal Teknologi Informasi, Komunikasi dan Elektronika (JTIKE); Vol. 2 No. 1 (2021): June 2021 - Present; pp. 8-13



    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English


    Classification:

    DDC: 629






