It is fair to assume that when humans drive, they rely on prior knowledge to predict the behavior of other vehicles. For a self-driving car to operate safely among surrounding cars, it must behave predictably, in a way similar to human-driven cars. It is common to adopt machine learning methods, which have improved considerably with the availability of big data, to predict vehicle behavior. However, the more scenarios such a model covers, the larger it becomes, and it inevitably loses interpretability. On the other hand, a simple model represented by a linear combination of features is interpretable but covers only a limited range of scenarios. To address these limitations, we propose a method that builds multiple linear models together with a model that selects the appropriate one for the current situation. This set of models is extracted from a trained machine learning model. The proposed approach was validated using real-world driving data, namely the Next Generation Simulation (NGSIM) dataset.
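
The abstract does not specify how the linear models and the selector are extracted. A minimal sketch of one such extraction, assuming a scikit-learn black-box regressor, a k-means based selector, and illustrative NGSIM-style features (ego speed, gap to lead vehicle, relative speed); these choices are assumptions for illustration, not the authors' actual pipeline:

# Illustrative sketch (not the authors' code): distil a trained black-box driver
# model into several interpretable linear models plus a situation selector.
# Feature names, the toy acceleration label, and the k-means selector are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Toy stand-in for NGSIM-style features: [ego speed, gap to lead vehicle, relative speed]
X = rng.uniform([0.0, 5.0, -5.0], [30.0, 80.0, 5.0], size=(5000, 3))
y = np.clip(0.5 * X[:, 2] + 0.02 * (X[:, 1] - 2 * X[:, 0]), -3, 3)  # toy acceleration label

# 1) Train an accurate but opaque model on the driving data.
black_box = GradientBoostingRegressor().fit(X, y)

# 2) Partition the driving situations (here via k-means); this acts as the model selector.
selector = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# 3) Fit one interpretable linear model per partition to the black-box predictions.
y_bb = black_box.predict(X)
linear_models = []
for k in range(selector.n_clusters):
    mask = selector.labels_ == k
    linear_models.append(LinearRegression().fit(X[mask], y_bb[mask]))

def predict(x):
    # Pick the linear model matching the current situation, then apply it.
    k = selector.predict(x.reshape(1, -1))[0]
    return linear_models[k].predict(x.reshape(1, -1))[0]

print(predict(np.array([15.0, 30.0, -1.0])))

Each per-partition linear model stays interpretable (its coefficients can be read directly), while the selector decides which local model applies to a given situation; together they approximate the larger, opaque model.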


    Title:
    Interpretable Driver Models Discovery in Data

    Contributors:

    Publication date:
    2020-09-20

    Size:
    2,225,963 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English





    DRIVER BEHAVIOUR MODELS AND DRIVER SUPPORT

    Peters, B. / Nilsson, L. / European Commission | British Library Conference Proceedings | 2005


    Understanding Ridesplitting Behavior with Interpretable Machine Learning Models Using Chicago Transportation Network Company Data

    Abkarian, Hoseb / Chen, Ying / Mahmassani, Hani S. | Transportation Research Record | 2021