Simulation of the real world is a widely researched topic in many fields. The automotive industry in particular depends heavily on real-world simulation, which is needed to prove the safety of advanced driver assistance systems (ADAS) and autonomous driving (AD). In this paper we propose a deep-learning-based model for simulating the outputs of production sensors used in autonomous vehicles. We introduce an improved Recurrent Conditional Generative Adversarial Network (RC-GAN) consisting of Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) cells in both the generator and the discriminator, which generates production sensor errors that exhibit long-term temporal correlations. The network is trained in a sequence-to-sequence fashion, with the model output conditioned on sequences describing the surrounding environment. This enables the model to capture spatial and temporal dependencies, and it is used to generate synthetic time series describing the errors of a production sensor, which can be used for more realistic simulations. The model is trained on a data set collected on real roads in various traffic settings and yields significantly better results than previous work.
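The abstract describes an LSTM-based recurrent conditional GAN whose generator and discriminator are both conditioned on sequences describing the environment. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation: all dimensions (COND_DIM, NOISE_DIM, ERROR_DIM, HIDDEN), module names, and the training step are illustrative assumptions.

    # Minimal sketch of a recurrent conditional GAN (assumed architecture):
    # LSTM generator and discriminator, both conditioned on an environment sequence.
    import torch
    import torch.nn as nn

    COND_DIM, NOISE_DIM, ERROR_DIM, HIDDEN = 8, 16, 1, 64

    class Generator(nn.Module):
        """Maps a (noise, condition) sequence to a sensor-error sequence."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(NOISE_DIM + COND_DIM, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, ERROR_DIM)

        def forward(self, noise, cond):
            # noise: (batch, seq_len, NOISE_DIM), cond: (batch, seq_len, COND_DIM)
            h, _ = self.lstm(torch.cat([noise, cond], dim=-1))
            return self.out(h)  # (batch, seq_len, ERROR_DIM)

    class Discriminator(nn.Module):
        """Scores every time step of an (error, condition) sequence as real/fake."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(ERROR_DIM + COND_DIM, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, 1)

        def forward(self, err, cond):
            h, _ = self.lstm(torch.cat([err, cond], dim=-1))
            return self.out(h)  # per-step logits, (batch, seq_len, 1)

    def train_step(gen, disc, opt_g, opt_d, real_err, cond):
        """One adversarial update with the standard non-saturating GAN loss."""
        bce = nn.BCEWithLogitsLoss()
        batch, seq_len = cond.shape[:2]
        noise = torch.randn(batch, seq_len, NOISE_DIM)

        # Discriminator: push real sequences toward 1, generated sequences toward 0.
        fake_err = gen(noise, cond).detach()
        d_loss = bce(disc(real_err, cond), torch.ones(batch, seq_len, 1)) + \
                 bce(disc(fake_err, cond), torch.zeros(batch, seq_len, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator output 1 on generated data.
        fake_err = gen(noise, cond)
        g_loss = bce(disc(fake_err, cond), torch.ones(batch, seq_len, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

Scoring every time step (rather than only the final hidden state) is one common choice for recurrent discriminators, since it penalizes locally implausible error sequences; a single per-sequence score would also be a valid design under the same assumptions.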
Recurrent Conditional Generative Adversarial Networks for Autonomous Driving Sensor Modelling
01.10.2019
538655 bytes
Conference paper
Electronic resource
English
Realistic Ultrasonic Environment Simulation Using Conditional Generative Adversarial Networks
British Library Conference Proceedings | 2019
On Generating Parametrised Structural Data Using Conditional Generative Adversarial Networks
British Library Conference Proceedings | 2021