This paper presents a comparison between the Twin Delayed Deep Deterministic Policy Gradient (TD3) and Soft Actor-Critic (SAC) reinforcement learning algorithms in the context of training robust navigation policies for Jackal robots. By leveraging an open-source framework and custom motion control environments, the study evaluates the performance, robustness, and transferability of the trained policies across a range of scenarios. The primary focus of the experiments is to assess the training process, the adaptability of the algorithms, and the robot’s ability to navigate previously unseen environments. Moreover, the paper examines the influence of varying environment complexities on the learning process and the generalization capabilities of the resulting policies. The results of this study aim to inform and guide the development of more efficient and practical reinforcement learning-based navigation policies for Jackal robots in real-world scenarios.
A Comparative Study of Twin Delayed Deep Deterministic Policy Gradient and Soft Actor-Critic Algorithms for Robot Exploration and Navigation in Unseen Environments
Conference paper
Electronic Resource
English
DDC: 629
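The abstract above describes a side-by-side training comparison of TD3 and SAC. As a purely illustrative aside, the sketch below shows how such a comparison could be set up with the open-source Stable-Baselines3 library; the paper's actual framework and custom Jackal motion-control environments are not identified in this record, so the standard Pendulum-v1 task stands in as a placeholder continuous-control environment.

```python
# Minimal sketch (not the paper's code) of a TD3-vs-SAC training comparison.
# Assumes Stable-Baselines3 and Gymnasium; "Pendulum-v1" is a stand-in for
# the paper's custom Jackal motion-control environments.
import gymnasium as gym
from stable_baselines3 import SAC, TD3
from stable_baselines3.common.evaluation import evaluate_policy

def train_and_evaluate(algo_cls, env_id: str, steps: int = 50_000):
    """Train one off-policy algorithm, then report mean reward over 10 episodes."""
    env = gym.make(env_id)
    model = algo_cls("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=steps)
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
    env.close()
    return mean_reward, std_reward

if __name__ == "__main__":
    # Both algorithms see the same task and training budget, mirroring the
    # controlled comparison the abstract describes.
    for algo_cls in (TD3, SAC):
        mean, std = train_and_evaluate(algo_cls, "Pendulum-v1")
        print(f"{algo_cls.__name__}: {mean:.1f} +/- {std:.1f}")
```

Evaluating on environments not seen during training, as the paper does for its unseen-environment experiments, would simply pass a different `env_id` to `evaluate_policy`.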
Spacecraft Motion Planning Based on the Twin Delayed Deep Deterministic Policy Gradient Algorithm
Springer Verlag | 2025
Actor-Critic Policy Learning in Cooperative Planning
AIAA | 2010
Multiagent Soft Actor–Critic for Traffic Light Timing
ASCE | 2023