This paper presents a comparison between the Twin-Delayed Deep Deterministic Policy Gradient (TD3) and Soft Actor-Critic (SAC) reinforcement learning algorithms in the context of training robust navigation policies for Jackal robots. By leveraging an open-source framework and custom motion-control environments, the study evaluates the performance, robustness, and transferability of the trained policies across a range of scenarios. The experiments primarily assess the training process, the adaptability of the algorithms, and the robot's ability to navigate previously unseen environments. Moreover, the paper examines how varying environment complexity influences the learning process and the generalization capabilities of the resulting policies. The results aim to inform and guide the development of more efficient and practical reinforcement learning-based navigation policies for Jackal robots in real-world scenarios.
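As an illustrative sketch of how such a TD3-versus-SAC comparison might be set up, one could use the Stable-Baselines3 and Gymnasium libraries; note these are assumptions, since this record does not name the paper's open-source framework, and Pendulum-v1 here is only a placeholder continuous-control task standing in for the paper's custom Jackal motion-control environments:

```python
import gymnasium as gym
from stable_baselines3 import SAC, TD3

# Placeholder continuous-control task; the paper's custom Jackal
# motion-control environments would be registered and used here instead.
ENV_ID = "Pendulum-v1"

results = {}
for algo in (TD3, SAC):
    # Identical policy settings and seed for a like-for-like comparison.
    # (A real TD3 run would typically also add exploration action noise.)
    model = algo("MlpPolicy", gym.make(ENV_ID), seed=0, verbose=0)
    model.learn(total_timesteps=50_000)

    # Evaluate the trained policy on a freshly seeded environment,
    # standing in for the paper's "previously unseen" test scenarios.
    eval_env = gym.make(ENV_ID)
    obs, _ = eval_env.reset(seed=1)
    episode_return = 0.0
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = eval_env.step(action)
        episode_return += float(reward)
        if terminated or truncated:
            break
    results[algo.__name__] = episode_return

print(results)
```

Both algorithms are off-policy actor-critic methods for continuous action spaces, which is why they can be swapped under a shared interface like this; the paper's actual protocol (environment-complexity sweeps and transfer tests) is not reproduced in this sketch.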


    Title:
    A Comparative Study of Twin Delayed Deep Deterministic Policy Gradient and Soft Actor-Critic Algorithms for Robot Exploration and Navigation in Unseen Environments

    Contributors:

    Media type:
    Conference paper

    Format:
    Electronic resource

    Language:
    English

    Classification:
    DDC: 629




    Similar titles:

    Soft Actor-Critic Deep Reinforcement Learning for Fault Tolerant Flight Control

    Dally, Killian / Kampen, Erik-Jan Van | TIBKAT | 2022


    Actor-Critic Policy Learning in Cooperative Planning

    Redding, Joshua / Geramifard, Alborz / Choi, Han-Lim et al. | AIAA | 2010


    Multiagent Soft Actor–Critic for Traffic Light Timing

    Wu, Lan / Wu, Yuanming / Qiao, Cong et al. | ASCE | 2023


    Stepwise Soft Actor–Critic for UAV Autonomous Flight Control

    Ha Jun Hwang / Jaeyeon Jang / Jongkwan Choi et al. | DOAJ | 2023

    Open access