The lifelong learning process in humans encompasses both cross-domain adaptability and within-domain adaptability under changing conditions. Achieving a comparable capability remains a challenge for artificial intelligence. The issue is particularly pronounced in autonomous driving decision-making, where vehicles must handle not only diverse standard environments but also dynamically changing conditions. This paper introduces a novel approach in which competence-based unsupervised reinforcement learning (RL) is employed to identify correlations between different policies under real-time varying conditions. This correlation discovery serves as a foundation for continual learning and further constructs a lifelong learning paradigm. We present a detailed theoretical proof of this approach and demonstrate significant improvements over traditional RL baselines.
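The abstract does not specify the learning objective used in the unsupervised stage. As one possible reading, the sketch below assumes the competence-based unsupervised RL step resembles skill-discovery methods of the DIAYN family, where a discriminator q(z|s) ties latent skills (candidate policies) to the states they reach, and the resulting skill posterior can serve as a correlation signal between policies. The names SkillDiscriminator and intrinsic_reward are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of competence-based (skill-discovery) intrinsic reward.
# Assumption: DIAYN-style objective r = log q(z|s) - log p(z), uniform p(z).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkillDiscriminator(nn.Module):
    """Predicts which latent skill z produced a visited state s."""

    def __init__(self, state_dim: int, n_skills: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_skills),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Returns unnormalized logits over skills.
        return self.net(state)


def intrinsic_reward(disc: SkillDiscriminator,
                     state: torch.Tensor,
                     skill: torch.Tensor,
                     n_skills: int) -> torch.Tensor:
    """Competence reward: log q(z|s) - log p(z), with p(z) uniform over skills."""
    with torch.no_grad():
        log_q = F.log_softmax(disc(state), dim=-1)          # (batch, n_skills)
        log_q_z = log_q.gather(-1, skill.unsqueeze(-1)).squeeze(-1)
    return log_q_z + torch.log(torch.tensor(float(n_skills)))


# Example usage: 4-dim driving state, 8 latent skills (hypothetical sizes).
disc = SkillDiscriminator(state_dim=4, n_skills=8)
states = torch.randn(32, 4)
skills = torch.randint(0, 8, (32,))
r_int = intrinsic_reward(disc, states, skills, n_skills=8)
```

The discriminator itself would be trained with a cross-entropy loss on (state, skill) pairs collected by a skill-conditioned policy; pairwise similarities of the learned skill posteriors are one way such policy correlations could feed a continual-learning stage.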

    Title:

    From Unsupervised Reinforcement Learning to Continual Reinforcement Learning: Leading Learning from the Relevance to the Whole of Autonomous Driving Decision-Making


    Contributors:
    Ma, Zhenyu (author) / Cui, Yixin (author) / Huang, Yanjun (author)


    Publication date:

    24.09.2024


    Format / Extent:

    5271604 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English





    Human-Guided Continual Learning for Personalized Decision-Making of Autonomous Driving

    Yang, Haohan / Zhou, Yanxin / Wu, Jingda et al. | IEEE | 2025



    A Decision-Making Method for Connected Autonomous Driving Based on Reinforcement Learning

    Zhang, Mingheng / Wan, Xing / Lv, Xinfei et al. | British Library Conference Proceedings | 2020