Multi-Obstacle Path Planning using Deep Reinforcement Learning

The need for path planning for autonomous vehicles is greatly elevated in high-stress, high-intensity scenarios that are prone to human error. Deep reinforcement learning offers one mechanism for training a policy to predict actions from environment state information. In this work, we perform a comparative analysis of design choices in a multi-obstacle environment: two state representations, world-based and ownship-based; sorting versus not sorting obstacle information by immediate threat; curriculum learning that increases environment difficulty over time; and supervised pre-training for faster convergence. Our results show that the ownship representation outperformed the world representation with statistical significance across a multitude of metrics; sorting yielded better median results only for the world-based representation; curriculum learning likewise yielded better median results only for the world-based representation; and supervised pre-training failed to transfer to reinforcement learning for either representation.
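To make the representation comparison concrete, the following is a minimal sketch, assuming a 2D kinematic setting, of how an ownship-based observation with threat-based sorting might be assembled. The threat proxy used here (distance divided by closing speed, roughly a time-to-approach) and the function name are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation) of two ideas from the
# abstract: an ownship-centric observation and threat-based sorting.
# The threat metric below is an assumed proxy; the paper does not
# specify its definition of "immediate threat".
import numpy as np

def build_observation(ownship_pos, ownship_vel, obstacles, sort_by_threat=True):
    """Return a flat observation of obstacle states relative to the ownship.

    ownship_pos, ownship_vel: shape-(2,) arrays in the world frame
    obstacles: list of (pos, vel) tuples, each a shape-(2,) array
    """
    rows = []
    for pos, vel in obstacles:
        rel_pos = pos - ownship_pos              # ownship-relative position
        rel_vel = vel - ownship_vel              # ownship-relative velocity
        dist = np.linalg.norm(rel_pos)
        # Closing speed: positive when the obstacle is approaching.
        closing = max(-np.dot(rel_pos, rel_vel) / (dist + 1e-8), 1e-3)
        threat = dist / closing                  # smaller = more urgent (assumed proxy)
        rows.append((threat, np.concatenate([rel_pos, rel_vel])))
    if sort_by_threat:
        rows.sort(key=lambda r: r[0])            # most threatening obstacle first
    return np.concatenate([r[1] for r in rows])
```

A world-based variant would simply emit each obstacle's absolute position and velocity instead of the ownship-relative quantities.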
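The curriculum idea can be sketched in the same spirit. Here `make_env`, `evaluate`, and `agent.train_episode` are hypothetical placeholders standing in for whatever environment and training loop the paper uses, and the graduation rule (advance when the success rate clears a threshold) is one assumed way of increasing difficulty over time.

```python
# Minimal curriculum-learning sketch: the obstacle count grows once the
# policy clears a success threshold at the current difficulty. All names
# and thresholds are illustrative assumptions, not the paper's API.
def train_with_curriculum(agent, make_env, evaluate,
                          start_obstacles=1, max_obstacles=10,
                          success_threshold=0.8, episodes_per_stage=1000):
    n = start_obstacles
    while n <= max_obstacles:
        env = make_env(num_obstacles=n)          # harder environment each stage
        for _ in range(episodes_per_stage):
            agent.train_episode(env)
        if evaluate(agent, env) >= success_threshold:
            n += 1                               # graduate to the next stage
    return agent
```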
2024-09-29
Conference paper