It is fair to assume that when humans drive, they rely on prior knowledge to predict the behavior of other vehicles. For a self-driving car to operate safely among surrounding vehicles, it must behave predictably, much like human-driven cars. Machine learning methods, which have improved considerably with the availability of large-scale data, are commonly adopted to predict vehicle behavior. However, the more scenarios a model covers, the larger it becomes, and it inevitably loses interpretability. Conversely, a simple model represented as a linear combination of features is interpretable but covers only a limited range of scenarios. To address these limitations, we propose a method that builds multiple linear models together with a selector model that chooses the appropriate one for the current situation. This set of models is extracted from a trained machine learning model. The proposed approach was validated on real-world driving data, namely the Next Generation Simulation (NGSIM) dataset.
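The abstract only outlines the idea of pairing several linear models with a situation-dependent selector extracted from a trained model. The following is a minimal sketch of that general idea, not the paper's actual algorithm: it distills a black-box regressor into per-situation linear models, using k-means clusters as an assumed stand-in for the selector; the feature names and synthetic data are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic driving-like features: [ego speed, gap to lead vehicle, relative speed]
X = rng.uniform([0.0, 5.0, -5.0], [30.0, 100.0, 5.0], size=(5000, 3))
# Synthetic target: next-step acceleration from a nonlinear "true" behavior
y = np.tanh(0.05 * (X[:, 1] - 2.0 * X[:, 0])) + 0.3 * X[:, 2] + rng.normal(0, 0.05, 5000)

# 1) Train an accurate but opaque behavior model.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# 2) Partition the feature space into "situations" (here, simply k-means clusters;
#    the paper's selector is presumably learned differently).
selector = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
situation = selector.predict(X)

# 3) Fit one interpretable linear model per situation, distilling the
#    black-box predictions rather than the raw labels.
distill_targets = black_box.predict(X)
linear_models = []
for k in range(selector.n_clusters):
    mask = situation == k
    linear_models.append(LinearRegression().fit(X[mask], distill_targets[mask]))

def predict(x):
    """Select the linear model for x's situation and apply it."""
    k = selector.predict(x.reshape(1, -1))[0]
    return linear_models[k].predict(x.reshape(1, -1))[0]

# Each situation is now described by a readable linear rule.
for k, lm in enumerate(linear_models):
    print(f"situation {k}: accel ~ {lm.coef_.round(3)} . x + {lm.intercept_:.3f}")
print("example prediction:", predict(np.array([15.0, 40.0, -1.0])))
```

The design choice illustrated here is the trade-off the abstract names: each per-situation model stays a small, inspectable linear rule, while coverage of many scenarios comes from the selector rather than from a single large model.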
Interpretable Driver Models Discovery in Data
2020-09-20
2225963 byte
Conference paper
Electronic Resource
English
Department Highlights - Discovery offers Bosch driver assistance
Online Contents | 2004
DRIVER BEHAVIOUR MODELS AND DRIVER SUPPORT
British Library Conference Proceedings | 2005
Transportation Research Record | 2021