Lifelong learning in humans encompasses both cross-domain adaptability and within-domain adaptability under changing conditions; it poses a similar challenge for artificial intelligence. The issue is particularly pronounced in autonomous driving decision-making, where vehicles must cope not only with diverse standard environments but also with dynamically changing conditions. This paper introduces a novel approach in which competence-based unsupervised reinforcement learning (RL) is employed to identify correlations between different policies under real-time varying conditions. This correlation discovery serves as a foundation for continual learning and, in turn, for constructing a lifelong learning paradigm. We present a detailed theoretical proof of the approach and demonstrate significant improvements over traditional RL baselines.
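The abstract does not spell out the authors' exact objective, but competence-based unsupervised RL is commonly formulated as skill discovery with a discriminator reward (as in DIAYN-style methods). The sketch below illustrates that generic idea only: an agent conditioned on a latent skill z receives an intrinsic reward for visiting states from which z can be inferred. All names, dimensions, and the toy linear discriminator are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a competence-based (skill-discovery) intrinsic reward.
# The agent is rewarded when a discriminator q(z | s) can recover the active
# skill z from the visited state s. This is a generic DIAYN-style objective,
# not necessarily the objective used in the cited paper.
import numpy as np

rng = np.random.default_rng(0)
n_skills = 4      # number of latent skills / candidate policies (assumed)
state_dim = 8     # toy state dimension (assumed)

# Toy linear-softmax discriminator q(z | s); in practice this is a trained network.
W = rng.normal(size=(n_skills, state_dim))

def discriminator_log_probs(state):
    """Return log q(z | s) for every skill z under the toy discriminator."""
    logits = W @ state
    logits -= logits.max()                    # numerical stability
    log_norm = np.log(np.exp(logits).sum())
    return logits - log_norm

def intrinsic_reward(state, skill):
    """Competence-based reward: log q(z | s) - log p(z), with a uniform prior p(z)."""
    return discriminator_log_probs(state)[skill] - np.log(1.0 / n_skills)

# Example: score a visited state under skill index 2.
s = rng.normal(size=state_dim)
print(intrinsic_reward(s, skill=2))
```

In such formulations, the learned discriminator also induces a similarity structure over skills, which is one plausible way the "correlations between different policies" mentioned in the abstract could be measured and reused for continual learning.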


    Title:

    From Unsupervised Reinforcement Learning to Continual Reinforcement Learning: Leading Learning from the Relevance to the Whole of Autonomous Driving Decision-Making


    Contributors:
    Ma, Zhenyu (author) / Cui, Yixin (author) / Huang, Yanjun (author)


    Publication date:

    2024-09-24


    Size:

    5271604 bytes





    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English


    Similar titles:

    Human-Guided Continual Learning for Personalized Decision-Making of Autonomous Driving

    Yang, Haohan / Zhou, Yanxin / Wu, Jingda et al. | IEEE | 2025


    A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning

    Zhang, Mingheng / Wan, Xing / Lv, Xinfei et al. | British Library Conference Proceedings | 2020