The port and shipping industry urgently needs to accelerate its transition to green and intelligent technologies in the post-pandemic era. Ship collision avoidance remains a pivotal challenge in achieving intelligent navigation. This paper proposes a deep reinforcement learning-based method for ship collision avoidance path planning in dynamic environments. First, a two-dimensional grid-based spatial environment is established to model ship domains, and collision risk levels are assessed from Automatic Identification System (AIS) data and the International Regulations for Preventing Collisions at Sea (COLREGs). The ship collision avoidance problem is then formulated as a Markov Decision Process (MDP), in which the observation space, action space, and reward function for collision avoidance are explicitly defined. Within this MDP framework, the Dueling Double Deep Q-Network (D3QN) algorithm is employed to derive collision avoidance decisions, incorporating prioritized experience replay and an adaptively decaying ε-greedy exploration strategy to improve training efficiency. Simulation experiments are conducted across multiple COLREGs-governed encounter scenarios, and the results substantiate the effectiveness of the proposed deep reinforcement learning approach for ship collision avoidance path planning.
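The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the D3QN building blocks it names: the dueling value/advantage architecture, double-Q targets, and a decaying ε-greedy exploration schedule. The observation layout, action set, network sizes, and hyperparameters below are illustrative assumptions, and prioritized experience replay is omitted for brevity.

```python
# Minimal sketch of the D3QN ingredients named in the abstract (PyTorch).
# Assumptions (not from the paper): a flat observation vector encoding own-ship and
# target-ship state plus grid-based risk features, a small discrete set of
# course-alteration actions, and placeholder hyperparameters.
import math
import random

import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling architecture: shared trunk with separate value and advantage streams."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
        return v + a - a.mean(dim=1, keepdim=True)


def epsilon_by_step(step: int, eps_start: float = 1.0,
                    eps_end: float = 0.05, decay: float = 5000.0) -> float:
    """Decaying epsilon schedule: exploration probability shrinks toward eps_end."""
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)


def select_action(net: DuelingQNet, obs: torch.Tensor, step: int, n_actions: int) -> int:
    """Epsilon-greedy action selection using the decaying schedule above."""
    if random.random() < epsilon_by_step(step):
        return random.randrange(n_actions)                       # explore
    with torch.no_grad():
        return int(net(obs.unsqueeze(0)).argmax(dim=1).item())   # exploit


def double_dqn_targets(online: DuelingQNet, target: DuelingQNet,
                       rewards: torch.Tensor, next_obs: torch.Tensor,
                       dones: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: the online net picks the action, the target net evaluates it."""
    with torch.no_grad():
        best_actions = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```

In a training loop, the TD error between the online network's Q-values for the sampled actions and these targets would drive the gradient update and, if prioritized experience replay were added as in the paper, would also set the sampling priorities.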
Deep Reinforcement Learning Based Path Planning and Collision Avoidance for Smart Ships in Complex Environments
20.09.2024
1641408 bytes
Conference paper
Electronic resource
English
COLREGs-Compliant Collision Avoidance Method for Autonomous Ships via Deep Reinforcement Learning
British Library Conference Proceedings | 2022
Springer Verlag | 2022