Collisions between ships seriously threaten the safety of maritime traffic, and more than 80% of maritime accidents are related to human factors. To realize autonomous collision avoidance for Unmanned Surface Vehicles (USVs), this paper proposes an autonomous collision avoidance method based on Deep Reinforcement Learning (DRL). First, to enable the ship to take collision avoidance actions at the appropriate time and reach the target point as soon as possible, two navigation states are defined based on the Quaternion Ship Domain (QSD): the goal-oriented state and the collision avoidance state. Second, different state spaces are designed for dynamic and static obstacles to reduce redundant input information and speed up the convergence of the algorithm. In addition, the COLREGs and navigation practice are taken into account in the design of the reward function, so that the agent's actions are consistent with good seamanship. Finally, on the basis of the Deep Q-Network (DQN), one of the most representative DRL algorithms, experiments with static obstacle scenarios and a variety of dynamic encounter scenarios are designed to test the rationality and effectiveness of the algorithm. The experimental results show that, using the proposed algorithm, the USV reaches the target point without collision, and its decisions conform to the COLREGs and navigation experience. This indicates that the proposed algorithm can support autonomous ship collision avoidance decision-making.
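As a rough illustration of the two navigation states described in the abstract, the sketch below switches the USV from the goal-oriented state to the collision avoidance state once an obstacle enters its ship domain. The quadrant-wise elliptical boundary, the radii, and all names are illustrative assumptions made for this sketch, not the authors' QSD formulation or implementation.

import math

class QuaternionShipDomain:
    """Simplified asymmetric ship domain with fore, aft, starboard and port radii.

    Illustrative approximation only: the boundary is modelled as quadrant-wise
    ellipse semi-axes, not the exact QSD used in the paper.
    """

    def __init__(self, r_fore, r_aft, r_starboard, r_port):
        self.r_fore = r_fore
        self.r_aft = r_aft
        self.r_starboard = r_starboard
        self.r_port = r_port

    def contains(self, d_north, d_east, heading):
        """Check whether a point, given as north/east offsets from own ship, lies inside the domain.

        heading is own ship's course in radians, measured clockwise from north (NED convention).
        """
        # Rotate the relative position into the ship-fixed frame: x forward, y to starboard.
        x = d_north * math.cos(heading) + d_east * math.sin(heading)
        y = -d_north * math.sin(heading) + d_east * math.cos(heading)
        a = self.r_fore if x >= 0 else self.r_aft        # longitudinal semi-axis
        b = self.r_starboard if y >= 0 else self.r_port  # lateral semi-axis
        return (x / a) ** 2 + (y / b) ** 2 <= 1.0


def navigation_state(own_pos, own_heading, obstacle_positions, domain):
    """Return 'collision_avoidance' if any obstacle has entered the domain, else 'goal_oriented'."""
    for obs_north, obs_east in obstacle_positions:
        if domain.contains(obs_north - own_pos[0], obs_east - own_pos[1], own_heading):
            return "collision_avoidance"
    return "goal_oriented"


# Example: a target 250 m ahead and 150 m to starboard lies inside the assumed domain,
# so the agent switches from goal seeking to collision avoidance.
domain = QuaternionShipDomain(r_fore=800.0, r_aft=400.0, r_starboard=500.0, r_port=300.0)
print(navigation_state((0.0, 0.0), 0.0, [(250.0, 150.0)], domain))  # -> collision_avoidance

In the goal-oriented state the agent would steer toward the target point, while in the collision avoidance state the DQN policy would select COLREGs-compliant manoeuvres; both policies and the radii above would need to be tuned to the vessel in question.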
Title: COLREGs-compliant autonomous collision avoidance method based on deep reinforcement learning for USVs
Date: 2023-08-04
Size: 2,212,467 bytes
Type: Conference paper (Electronic Resource)
Language: English
COLREGs-Compliant Collision Avoidance Method for Autonomous Ships via Deep Reinforcement Learning | Springer Verlag | 2022
COLREGs-Compliant Collision Avoidance Method for Autonomous Ships via Deep Reinforcement Learning | British Library Conference Proceedings | 2022
Collision-avoidance under COLREGS for unmanned surface vehicles via deep reinforcement learning | Taylor & Francis Verlag | 2020