Neural network policies are widely explored in autonomous driving thanks to their ability to handle complicated driving tasks. However, their practical deployment is slowed by a lack of robustness against modeling gaps and external disturbances. In our prior work, we proposed a planner-controller architecture and applied a disturbance-observer-based (DOB) robust tracking controller to reject disturbances and achieve zero-shot policy transfer. In this paper, we present our latest progress on improving policy transfer performance under this framework. Concretely, we apply an adaptive DOB to model the inverse system dynamics more accurately and to raise the cut-off frequency of the Q-filter in the DOB. A closed-loop reference path smoothing algorithm is introduced to alleviate the step disturbance input imposed by reference trajectory re-planning. On the neural network control policy side, we apply parallel attribute networks, a hierarchical modular policy network, to dynamically handle various driving tasks. We carry out simulations and experiments to validate the capability of the proposed method to achieve sim-to-sim and sim-to-real policy transfer. The proposed method achieves the best performance among a series of baseline control schemes.
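
For readers unfamiliar with the disturbance-observer idea the abstract refers to, the following is a minimal illustrative sketch, not the paper's controller: it assumes a scalar first-order nominal plant, a finite-difference inverse nominal model, and a first-order low-pass Q-filter with cut-off frequency `wc`. All names and parameter values here are hypothetical.

```python
import numpy as np


class FirstOrderDOB:
    """Illustrative discrete-time disturbance observer (not the paper's design).

    Assumes a scalar nominal plant  y_dot = a*y + b*(u + d)  and a first-order
    low-pass Q-filter with cut-off frequency wc [rad/s] and sample time dt [s].
    """

    def __init__(self, a: float, b: float, wc: float, dt: float):
        self.a, self.b, self.dt = a, b, dt
        self.alpha = dt * wc / (1.0 + dt * wc)  # discrete low-pass gain
        self.y_prev = 0.0
        self.d_hat = 0.0

    def update(self, u_applied: float, y: float) -> float:
        """Update the lumped input-disturbance estimate from the last input and new measurement."""
        y_dot = (y - self.y_prev) / self.dt              # finite-difference derivative
        u_from_inverse = (y_dot - self.a * y) / self.b   # inverse nominal model
        d_raw = u_from_inverse - u_applied               # raw disturbance estimate
        self.d_hat += self.alpha * (d_raw - self.d_hat)  # Q-filter (low-pass)
        self.y_prev = y
        return self.d_hat


# Example: reject a constant input disturbance on the nominal plant.
if __name__ == "__main__":
    a, b, dt = -1.0, 1.0, 0.01
    dob = FirstOrderDOB(a, b, wc=20.0, dt=dt)
    y, d_true = 0.0, 0.5
    for _ in range(500):
        u_nominal = -y                    # simple stabilizing tracking command
        u = u_nominal - dob.d_hat         # compensate the estimated disturbance
        y += dt * (a * y + b * (u + d_true))
        dob.update(u, y)
    print(f"estimated disturbance: {dob.d_hat:.3f} (true: {d_true})")
```

In this sketch, raising `wc` lets the observer track faster-varying disturbances but also passes more measurement noise, which is the trade-off the adaptive DOB and the improved inverse-dynamics model in the abstract are aimed at relaxing.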


Title: Disturbance-Observer-Based Tracking Controller for Neural Network Driving Policy Transfer

Contributors: Tang, Chen (author) / Xu, Zhuo (author) / Tomizuka, Masayoshi (author)

Publication date: 2020-09-01

Size: 2581222 bytes

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English